Brief summary
We’re all subject to cognitive biases. And whether we’re aware of them or not, they can have a profound impact on the code we write — especially when working in an agile environment, where we have to constantly deal with uncertainties. We take a deep dive into where our biases emerge, the impacts they can have and how we can mitigate them to improve the quality of our code.
Full transcript
Alexey Boas: Hello, and welcome to the Thoughtworks Technology Podcast. My name is Alexey. I'm speaking from Santiago in Chile, and I will be one of your hosts, this time together with Rebecca Parsons. Hello, Rebecca.
Rebecca Parsons: Hello, everyone. This is Rebecca Parsons and I am coming to you from the Pacific Northwest. We're talking to our colleague today. Birgitta. Birgitta, would you like to introduce yourself?
Birgitta Böckeler: Yes, hi, everybody, I'm Birgitta Böckeler. I am based in Germany, and I am one of the technical principals for Thoughtworks in Germany.
Alexey: That's wonderful. We're very happy to have you with us this time, Birgitta. Thank you so much. The theme this time is cognitive biases. I'm sure many people have heard of them. It's going to be thrilling to explore their connection to software development and their impact on the day-to-day life of developers.
Birgitta, I know you are very passionate about the theme. I've personally seen at least a couple of talks that you gave on the subject, so I'm quite excited and looking forward to talking to you a little bit more about it. We all know it's a complex topic, right? Maybe you can start by talking a little bit about what cognitive biases are. Can we start with a definition of sorts?
Birgitta: Yes. You're right, it is a complex topic. I'm a developer by trade, right? So I'm also kind of an amateur at this. I think it's becoming more and more commonly talked about. For example, you might have heard a lot of people mention this book, Thinking, Fast and Slow, over the past few years. It feels to me like almost every second talk I see at a software development conference, somebody mentions this book in some context.
This is also one of the sources this idea of cognitive biases comes from, which is behavioral economics. The author of the book, Daniel Kahneman, was one of the originators of that school of thought. Basically, the context is that, as humans, we usually like to think that we're very rational, right? And in the cases where maybe we're not rational or logical, it has to do somehow with emotions, that we're being emotional, and that's why we can't think rationally or logically.
What Daniel Kahneman and his colleagues in the field came up with, around the '60s, I think, is this theory that maybe we're not as rational as we like to think we are, but that there are these things happening in our brains, these little errors, these little bugs, you might say, that affect our thinking and sometimes make us make little errors in judgment. I'm introducing these as bugs or errors, but on the other hand, they're actually also "features" because they kind of keep us sane.
They help us deal with the world and with all of the information that is always coming at us, because our brain constantly has to take so many quick decisions and interpret everything that's going on around us. Those things usually keep us sane, but then there are situations where we also make little misjudgments, and it's important for us to know that maybe we're not always as rational as we think we are.
Alexey: That's great. You're right. It's impressive how often the topic comes up in contexts you wouldn't imagine. Then how does it connect to software development? How is it relevant to software projects or to our context in general? How does it materialize?
Birgitta: Software development is a discipline where we have to deal with a lot of uncertainty. We can see that, for example, in the need for agile development and the principles of agile development, because it's all about embracing change, right? Recognizing that change is going to come at us all the time, along with new information and new context.
By working in an agile way, we try to prepare for that change all the time and be resilient to it. I think that nicely shows how much uncertainty we have to deal with. A lot of the cognitive biases actually come from our inability, or not great ability, to deal with uncertainty. I once read about a study that said that uncertainty is actually even more stressful for us than knowing that something bad is definitely going to happen. That's how bad we are at dealing with this.
As I said, us having a hard time dealing with uncertainty and ambiguity leads to a lot of these cognitive biases kicking in. We have to kind of-- maybe not always be aware of them, but if we think about things like architecture, there's this neat little definition of architecture that our colleague, Martin Fowler, I think uses a lot, that architecture is the stuff that is hard to change later.
Especially when we're making decisions about architecture or about things where we think they're going to be hard to change later, I think in those situations, it's especially important to be extra aware of where we might be making these little errors in judgment and maybe try to mitigate them a little bit.
Rebecca: Well, I guess I would say too, one of the things that we've really learned over the last several years is the extent to which software development truly is a social activity. We need to talk to people, we need to try to understand their problems, and there's a lot of people interaction. These biases exist in humans. So, in any significant human activity, we're going to have to deal with the existence and consequences of these biases. I find it quite interesting to think about, "Okay, so what would the implication of something like this be in software development?"
Alexey: Yes, that's quite true. It's interesting. I'm no expert either, but as far as I know from what I've read, biases help a lot with making quick decisions in the face of uncertainty. You need to decide that you need to run away from the lion, and then the biases kick in and help you make a sane decision that will protect you.
Many times, though, we need our rational brain and deeper thinking, in full function, to understand the complexity of the situations we face, and software development is definitely one good example. Can we dive into a couple of examples? Birgitta, can you share with us maybe some of the biases, a couple of stories, or some examples of what you've seen over your career?
Birgitta: Yes. We can maybe start with the area of decision-making. I already just talked about architecture decision-making, and maybe in that area, it's especially important to be aware because of decisions that might be harder to change later. One of my favorite biases in this area that really blew my mind a little bit when I learned about it is called the outcome bias.
This one is basically about us often equating the quality of a decision with the outcome of the decision.
Basically, we take a decision, and at some point in the future, we have hindsight and we know the outcome of the decision, whether what happened after that was good or bad. Then we often say, "Oh, that was a good decision," or, "That was a bad decision," purely based on what the outcome was. But the outcome doesn't necessarily say anything about how good or bad the decision-making process was, because that depends a lot on the information we had at the time and on how we approached making the decision.
An example maybe is when you choose a framework or a technology you want to use. I had this case once, years ago, when React was still quite new, and my team wanted to choose a library to implement state management and the Flux architecture pattern in React. We chose a library; it was still quite early days for React. Then a year later, it turned out we had made "the wrong choice," because another library came up, Redux, which is still quite popular today, and it turned out to be the winner among the open-source libraries emerging at that time.
When we made the decision, we probably couldn't have known that based on the information that was there. There were different libraries around, they were at similar stages of maturity, so you could say in hindsight, "Oh, that was a bad decision. Now we have to all migrate to this more mature library." At the time that we took the decision, maybe our process was actually okay. We did the research, we did a spike, and that was a good approach that maybe I would use again.
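To picture the kind of library in question: here is a minimal, hand-rolled sketch of the Flux/Redux idea in TypeScript, where every state change flows through a pure reducer. The action and state shapes are made up for this illustration and are not taken from Redux's actual API or from that project.

```typescript
// Illustrative only: a hand-rolled version of the Flux/Redux idea, where all
// state changes go through a pure reducer.
type Action =
  | { type: "ADD_TODO"; text: string }
  | { type: "CLEAR_TODOS" };

interface State {
  todos: string[];
}

// Pure reducer: given the current state and an action, return the next state.
function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "ADD_TODO":
      return { ...state, todos: [...state.todos, action.text] };
    case "CLEAR_TODOS":
      return { ...state, todos: [] };
    default:
      return state;
  }
}

// A minimal store that funnels every change through the reducer.
function createStore(initial: State) {
  let state = initial;
  return {
    getState: () => state,
    dispatch: (action: Action) => {
      state = reducer(state, action);
    },
  };
}

const store = createStore({ todos: [] });
store.dispatch({ type: "ADD_TODO", text: "Compare state management libraries" });
console.log(store.getState().todos); // ["Compare state management libraries"]
```

The libraries the team was comparing all offered some variation of this pattern, which is why the choice came down to research and a spike rather than an obvious winner.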
An opposite example about technology choice might be: your cousin works at Netflix and uses a technology, and you hear about it at the dinner table. You're like, "Oh yes, if it's good enough for Netflix, we should try that as well." You use the technology and it actually works really well for you, and then you say, "Ah, that was a good decision," but was it really a good decision? Did you really do the research? Did you check if this technology fits your use case?
If for the next decision you take the same approach again, then you didn't really learn from the decision. It's certainly important to take into account what the outcome was, but I think we often jump over the step of analyzing how we took the decision and what information we had.
Rebecca: Very often we don't attempt to understand why we ended up where we did in a particular situation, in particular when something goes wrong. That's actually one of the things I like about this notion of a blame-free retrospective: you can say, "Okay, we did all of the right steps. Now, what was it that caused this ultimately to be the wrong decision? Well, something unanticipated happened. Was it really unanticipated, or was there something that we missed?"
If it really was unanticipated, then there was no way to get a right answer. You couldn't have ever got to the right decision because there was no way to have the information that ultimately would've swayed the decision in the right way. I think it's difficult for us sometimes to really try to dig in and it's much easier to just move on. I agree with you, I think this is a fascinating bias and I think it does tell us a lot about the value of truly introspecting the consequences of the individual decisions that we make.
Birgitta: I also like that you mentioned blameless postmortems or blameless retrospectives, because I think it's important to be aware of this bias not only as an individual but also as a group. If we fall into the trap of the outcome bias, it often reduces our compassion for ourselves and for others. For ourselves, maybe beating ourselves up about the result of a decision even though we put a lot of effort into it, did a lot of research, and got a lot of information. And I think it's quite common in tech for people to point fingers at each other in hindsight, when they know the outcome, and say, "Oh, why did they do it this way?" after something crashes or doesn't scale up to an unexpected spike in users.
I think that often actually leads to overengineering. If there's a culture of finger-pointing when you fail, without people questioning further, "Why did they do this? What information did they have?", then you maybe don't feel safe engineering just enough for the moment. You feel like you have to make a lot more certain that it will go well, because you're afraid of your peers' judgment and you already expect them not to question how you took the decision.
Alexey: That fear of judgment is really strong, and I guess a lot of the power of blameless conversations and blameless postmortems comes from the fact that they help you see those kinds of things in a social context. One interesting phenomenon that can happen is that you select the information. If I'm not mistaken, that's the confirmation bias: you select the data based on your previous beliefs, and then when you look at the decision you made, you see, "Oh, this worked out well. Look, the decision was great." Maybe the result wasn't even as good as it seems, because you were selecting the information based on the belief you already had going in.
Many times, and that might be another bias in itself, but I'm not sure, once someone has made a decision and declared it, the person is also attached to that decision to some extent. So it's really powerful to create those dynamics in which learning conversations can happen in a fear-free environment.
Birgitta: There's another bias attached to this as well. If you're actually trying to investigate further, "Okay, was this a good decision or a bad decision?", we also have to be careful of the self-serving bias, because whenever there's a bad outcome, we might say, "Ah, this was bad luck," and whenever there's a good outcome, we say, "Ah, that's because I took such a great decision," or the other way around.
We have to try and be honest with ourselves there as well, and all of this is related to the goal of learning from our decisions and from the past. I read a lot about outcome bias, self-serving bias, and the connection between all of those things in a really great book that I would also like to recommend. It's called Thinking in Bets, by Annie Duke. She's a former, really successful poker player.
She talks a lot about analyzing her poker game, the decisions she made throughout a game and the bets she placed, and telling herself afterwards, "Ah, these were really good decisions," because she won. "That was my skill." Or, when she was losing, "Oh, I just had bad luck." Then she learned to really investigate how she was playing and improved her game by trying to overcome outcome bias and self-serving bias as much as possible.
Alexey: Birgitta, we've been talking about decision-making, and there's one topic I'd like to hear from you about. When it comes to architectural decisions, the things that are hard to change, as Martin likes to put it, we've been trying with agile software development to create different techniques for those kinds of things, like evolutionary architecture. So how does this relate to biases? Are there some biases that kick in in those moments of long-lasting decisions?
Birgitta: I think it's not exactly a bias. By the way, there's all kinds of terminology in that space: biases, heuristics; sometimes the same thing has different names, and some are more researched than others. So if any of the listeners try to dig deeper into this topic, be careful, there's lots of stuff out there. But there's one thing in psychology called the need for closure.
There's actually a quote from this book I mentioned earlier, Thinking, Fast and Slow, that I noted down before the recording today and it says at some point that "sustaining doubt is harder work than sliding into certainty." It's similar to what I mentioned in the beginning about how we're having a hard time dealing with uncertainty. Then this need for closure often pushes us into making decisions because we just want to reduce the ambiguity, and the more decisions we already take today, the more certainty there seems to be.
I think that might be one of the reasons it's sometimes really hard for us to go in steps and ask the question, "Oh, do I have to take this decision now?" Rebecca, in the Evolutionary Architectures book, you all write about deferring decisions until the last responsible moment as one of the techniques to achieve that. I think sometimes it's really hard for us to do that.
Rebecca: Well, and it is one of the very common questions we often get is, "Okay, you're telling me I need to wait till the last responsible moment. What is that? How do I know when that moment is?" It's that related question of, "How long do I have to wait before I get to put this uncertainty behind me?" I think one of the things that we try to emphasize in evolutionary architecture relative to this is that the last responsible moment is really determined by how critical this decision is to the things that are going to determine the success or failure of your architecture.
One of my favorite examples of this is a trading system I was working on early in my Thoughtworks career. People hear "trading system" and they think, "High throughput, low latency, got to worry about performance," but in this particular trading application, they would maybe do 100 trades a day (and yes, I did say a day). Anything that had to do with performance and such was irrelevant. What they really cared about was never losing a message.
Decisions around the communication architecture, how we kept track of messages, and how we knew when things got stuck: those were the decisions that were really going to impact the success or failure of the system. That was where you wanted to concentrate. I think this helps mitigate, to some extent, this need for closure. It's like, "Well, I don't really have to worry about that one. I need to worry about these. Now, let me put my stress over these decisions here." I do think there's definitely a relationship between that need for closure and what we talk about in evolutionary architecture.
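To make the "never losing a message" concern a little more concrete, here is a tiny, hypothetical TypeScript sketch of that kind of tracking: remember every message until it is acknowledged and flag anything that looks stuck. The names and the threshold are invented for illustration; they are not from the actual trading system.

```typescript
// Hypothetical sketch: track every sent message until it is acknowledged,
// and surface the ones that look stuck.
interface TrackedMessage {
  id: string;
  sentAt: number; // epoch milliseconds
}

class MessageTracker {
  private readonly inFlight = new Map<string, TrackedMessage>();

  constructor(private readonly stuckAfterMs: number) {}

  recordSent(id: string, now: number = Date.now()): void {
    this.inFlight.set(id, { id, sentAt: now });
  }

  recordAck(id: string): void {
    // Once acknowledged, the message no longer needs watching.
    this.inFlight.delete(id);
  }

  // Messages that were sent but not acknowledged within the threshold.
  stuckMessages(now: number = Date.now()): TrackedMessage[] {
    return [...this.inFlight.values()].filter(
      (m) => now - m.sentAt > this.stuckAfterMs
    );
  }
}

// Usage: flag anything unacknowledged for more than 30 seconds.
const tracker = new MessageTracker(30_000);
tracker.recordSent("trade-42");
// ...later, a monitoring job could poll:
console.log(tracker.stuckMessages().map((m) => m.id));
```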
Birgitta: I think that also leads to one of the things we haven't really talked about yet: there are, of course, certain mitigations, things in the way that we work, that can help us tackle some of these biases. One that helps with a lot of them is always thinking about how I can get more information. Is there a chance I can get more information? Can I learn more about reality somehow?
Whenever we jump to conclusions too quickly or want to decide quickly, it keeps us from getting more information and learning more about reality. We have to stop that at some point, of course, but when I feel myself shutting down and not wanting to learn more, that's sometimes a warning sign. Also, this need for closure sometimes becomes especially hard to tackle the bigger the group of people working on something is.
If you're a small team of four or five people who trust each other, it's easier to defer a decision, because the context is smaller and maybe we think it's more probable that we'll get back to it later. The bigger the group of people, the more complex everything is, and the more we feel like we have to take decisions now and put in these guardrails for everybody, which often leads to over-governance. As an individual, I feel I need to make this problem space a bit safer, so I just take these decisions and then tell people what to do, for example.
Alexey: Birgitta, any examples closer to coding that you've seen or that come to mind?
Birgitta: Yes, debugging is an area closer to coding where I've certainly often felt this. We've all had that bug that we've looked into forever. At some point, you feel like you don't even know anymore what you've tried. You just tried this and that and that, and you just can't figure out what it is.
There are a few things that can trip us up here. For example, the availability bias, or availability heuristic, which says that we have a tendency to think that if something is easy for us to remember, easy to recall, it is also more important or more likely to be true. I had an example of this a few years ago when a developer had just joined our team. They had worked in another team across the hall before, and they were debugging something in our application. In that other team, they had just recently spent days debugging another problem, and they pretty quickly came to the conclusion that it must be the same thing, even though the bug they had had on the other team had actually been a freaky little thing that wasn't even that common. They spent a lot of time trying to prove that this new bug was a similar thing.
I think it was something with encoding in the browser, I don't remember exactly. I think that happens quite frequently: we just remember, "Oh, I just read this thing," or, "This just happened to me." It's also often the more extreme stories that are easier to remember, and then, according to this bias, we also think they're more important. But those are actually the extreme cases; the average ones are sometimes not even that easy to recall.
Alexey: How do we manage that? We talked a little bit about evolutionary architecture and the last responsible moment, and about whether we can get more information as a way to help us see if we are under the influence of a bias. Are there any mitigation strategies or ways to deal with them? Maybe, maybe not; that's an interesting point. Any advice on that, Birgitta?
Birgitta: In the debugging case, I have certainly started to be a lot more rigorous with myself when I debug. I really try to make a plan and apply the scientific method: come up with my hypotheses about what could be the cause. Sometimes I even look at the hypotheses and try to see which are the more probable, more common ones, because much more often a bug is just caused by the thing we changed last, or it's something like Tomcat running out of memory, which I've seen countless times.
That's much more common than some freakish, rare condition. So I look at the hypotheses and then maybe exclude some just by probability, and that's actually another whole area of biases: probability and statistics. Our brains suck at statistics. Whenever something is about statistics, I try not to believe what my brain is first telling me, but to investigate a little bit more.
So: the scientific method, my hypotheses, how I can try to prove them, and then I really try to write them down so I don't go down those rabbit holes of debugging. We talked about gathering more information, and then, of course, another thing in software delivery is just agile principles and methods, because, as I mentioned in the beginning, they are about embracing uncertainty and embracing change; that's what those methods are there for. Iterative development, going in steps: again, that is also about gathering more information. Yes, I think all of those agile principles also help us.
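As a rough illustration of writing hypotheses down and ordering them by probability, here is a small TypeScript sketch. The fields and the example hypotheses are invented; the point is only to show the habit of ranking likely causes and recording how each one will be tested.

```typescript
// Illustrative sketch of writing debugging hypotheses down and ordering them
// by how common the cause usually is, before chasing any single one of them.
type Likelihood = "common" | "occasional" | "rare";

interface Hypothesis {
  description: string;
  likelihood: Likelihood; // rough gut ranking, written down explicitly
  howToTest: string;
  result?: "confirmed" | "ruled out";
}

const hypotheses: Hypothesis[] = [
  {
    description: "The change we deployed last broke it",
    likelihood: "common",
    howToTest: "Revert the last change in a test environment and retry",
  },
  {
    description: "The app server is running out of memory",
    likelihood: "common",
    howToTest: "Check heap usage and GC logs around the time of failure",
  },
  {
    description: "A rare browser encoding issue like the other team's bug",
    likelihood: "rare",
    howToTest: "Reproduce with the exact browser and locale from the report",
  },
];

// Work through the list from most to least likely, recording each outcome,
// instead of starting with whatever explanation is easiest to recall.
const rank: Record<Likelihood, number> = { common: 0, occasional: 1, rare: 2 };
const plan = [...hypotheses].sort((a, b) => rank[a.likelihood] - rank[b.likelihood]);
plan.forEach((h) => console.log(`[${h.likelihood}] ${h.description} -> ${h.howToTest}`));
```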
Alexey: Yes. The conversation about retrospectives feels like another great strategy for that: bringing in more people and having open conversations, so maybe you can elicit allies that will help you see biases in yourself. I think it's easier to see biases in other people than in yourself. I'm not sure if I've read that somewhere, but it sounds true. Eliciting other people's support feels like a nice strategy. Does that make sense?
Birgitta: Yes, definitely. I can't remember the name right now, but there's a bias where, when you know a lot about cognitive biases, you think that you're more immune to them, and then you become overconfident. There's another really important aspect of this as well, which is confidence. On the one hand, diving into these biases can make you second-guess everything and can make you super insecure.
That's something to balance, definitely, and as I said in the beginning, we also need them. We should feel lucky that we have them, because they help us deal with the world. But when it comes to confidence, still saying, "I'm not sure," asking questions, thinking that it's okay to change our minds: those are also really important, again, for trying to see reality as much as possible, so we can take better decisions, build better software, and be more compassionate with the people around us.
I think this type of behavior, saying, "I'm not sure," asking questions, changing our minds, is often associated with lower confidence, and it's often seen as a bad thing. People will give you feedback like, "Oh, you should be more confident." But I think it's contextual. There's no fixed, "Okay, I am confident at level 80% or something"; I think we need to adjust it based on the situation.
In those situations where I take important decisions, I actually want to be less confident during the process of taking the decision, and then maybe increase my confidence once I get to my conclusion, of course. If I feel like I immediately know exactly what to do, but it's a really important decision, then I at least want to double-check and see what's going on.
Rebecca: Yes. I think that's an important point. Another mitigation that occurs to me, and Alexey actually alluded to this, is getting more people involved. It reminds me of the book by Thomas Kuhn, The Structure of Scientific Revolutions, where he talks about how these paradigm shifts within science occur. He's actually the person who coined that term, "paradigm shift".
One of the things he said is that very often the new theory that emerges comes not from somebody in the discipline itself, but from an adjacent discipline because they are looking at the problem in that slightly different way. They are bringing their own sets of biases and perspectives into that decision-making, but because it's slightly skewed from that of the people who are deep into the problem, they can see things that others can't.
It reminds me of a debugging example once very early in my career. Someone who was working for me came in and she said, "I just can't figure out what's wrong with this code." I asked her to just tell me what the code was doing line by line. She got to the line that was wrong, and she said, "Thank you so much, you're so smart, you helped me figure this out." I was like, "No, I just forced you to look at it and tell me what it did," but she was so blinded because she thought she knew what it did, that she couldn't see the problem.
Birgitta: It's rubber ducking, right? Rubber ducking, another great strategy against cognitive biases. It's great that you mention looking for other people to keep you in check, or keep you honest, because that's also what Annie Duke, the poker player I talked about earlier, talks about a lot in her book: having a group of people, other poker players, to discuss with and to analyze your game. They keep you honest so that you, for example, don't fall into the self-serving bias but really call yourself out on misjudgments you made.
Alexey: That's great. I guess we're coming to the end of the episode, so maybe I'll just ask you, Birgitta: if people want to know more about this, where should they look? You did mention Kahneman's book, Thinking, Fast and Slow, and Annie Duke's Thinking in Bets. Any other sources you recommend for people who want to dig deeper into the topic?
Birgitta: Yes. Thinking, Fast and Slow, I would say, is really one of the big ones there, but also, a warning: it's quite dense. You cannot read it before going to sleep in the evening. I still haven't finished the whole book, to be honest. It's quite a big one, but every time I read a bit of it, it blows my mind. It's really great. Then Annie Duke, Thinking in Bets.
There's also a book called The Art of Thinking Clearly by Rolf Dobelli. That's also quite nice because of how it's structured: there are a few pages per bias, and he gives an example for each, so you can also jump around in the book. Then there's a podcast, You Are Not So Smart, that I also quite like, and the host of the podcast has written a bunch of related books as well. I forget the name of the host at the moment, unfortunately, but yes, those are some places to get started.
Alexey: Well, it was a great conversation and great to have you with us. Thank you so much.
Rebecca: Yes. Thanks, Birgitta.
Birgitta: Thank you too.