Brief summary
We catch up with Dave Farley to hear about the genesis of his blockbuster book, Continuous Delivery — which he authored with Jez Humble — as well as his latest tome, Modern Software Engineering. He shares his ideas about the art of software development and common misconceptions about the principles of engineering.
Full transcript
Mike Mason: Hello, and welcome to the ºÚÁÏÃÅ Technology Podcast. Today, I am joined by my co-host, Neal Ford. Good morning, Neal.
Neal Ford: Good morning, Mike.
Mike: Together, we are joined by award-winning author, Dave Farley, who has had a long career in technology. Good morning, Dave.
Dave Farley: Morning, Mike, morning Neal.
Mike: You might know Dave as the author of the Continuous Delivery book, along with Jez Humble and a whole bunch of other stuff. What we wanted to do today was talk to Dave about his career in tech as part of our series of podcasts interviewing technology industry luminaries and talking about what they've done in their career.
I thought where we would start actually is at ºÚÁÏÃÅ because Dave did use to work for ºÚÁÏÃÅ. That's where I first knew him in the UK. I'm not sure exactly which year that would have been, Dave. I joined ºÚÁÏÃÅ in 2003, so somewhere between 2003 and 2010, sometime then. You were at ºÚÁÏÃÅ in London?
Dave: Yes. I joined in 2003 as well and I left in 2007-ish, something like that to go and work-- Yes, I got lured away by a friend. I had a great time in ºÚÁÏÃÅ and had a lot of fun and learnt a lot of things.
Mike: Should we talk about Continuous Delivery, the book, because I think that's one of the significant early pieces? When I look at your blog page with all the stuff you've done on it, the CD book, I think, has been huge for the industry. Can you tell us how that came about?
Dave: Sure. It was a very, very pragmatic way of writing a book really. At the time, I joined ºÚÁÏÃÅ actually to try and help Rebecca Parsons escape from a client who loved her to bits, but she was working in the wrong part of the world. I was hired to come in to try and replace Rebecca as the tech principal on the project that she was running, in part. They were hiring me for other reasons too.
I started working there and it was a difficult project. We were building a point-of-sale system for one of Great Britain's biggest electronics retailers at the time. It was challenging. We were deploying to tens of thousands of sites and doing all sorts of cool things. We in ºÚÁÏÃÅ, at the time, believed it to be the biggest agile project that had been tried so far.
We hit all sorts of challenges. ºÚÁÏÃÅ was very invested in extreme programming, certainly ºÚÁÏÃÅ in the UK where I was based. I had a background in building complex systems in a very agile way, using versions of extreme programming prior to ºÚÁÏÃÅ, so that gave me some of that experience and approach.
We were running into trouble. We were running into the difficulties of scaling up extreme programming, which at the time was seen as a discipline for small teams. We were pushing the boundaries a bit. We started learning some lessons about other ways of capturing this, thinking about it, and so on, and a variety of different pieces came together. I'd had some prior experience of a project that was more challenging than I wanted it to be because we didn't control the infrastructure well enough.
I'd got, I suppose, the seeds of doubt about [chuckles] traditional ways of dealing with infrastructure at the time, which led me to thinking a little bit in terms of infrastructure as code as the direction, I suppose. We were doing a pretty good job of fine-grained test-driven development. We were doing a pretty awful job on this project of broader testing strategies that told us that the application as a whole worked together.
One of the early lessons from my point of view in Continuous Delivery was recognizing that you needed developers in the feedback loop for all automated testing. If you don't close that feedback loop, that's a big problem [chuckles] in terms of the way of organizing the development function. There were a bunch of smart people working on that team, and we did some nice things to start to position this and square it up.
Then we started talking more broadly within ºÚÁÏÃÅ, a lot of it in the UK at this stage because we were working on projects locally and stuff like that. There were a number of ideas, patterns, that we were starting to see, applying this kind of very agile approach to developing software in, at least as far as we could see, more challenging circumstances.
There's a whole bunch of us, Dan North, Julian Simpson, Chris Reed, Jez, me, lots of other people who started talking vaguely about writing some kind of collective book together. We'd all got different ideas that we were going to put in to do an anthology kind of book. I started writing and Jez started writing and nobody else started writing.
Dave: After a while, we looked at this and thought, so what should we do with this? By this time, I'd come up with some patterns around ideas, like deployment pipelines and stuff like that, and started to synthesize those. I'd talked about those in a couple of conferences and stuff. That seemed to be a useful organizing principle for the book. That's the spine of the book really.
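As a rough illustration of the deployment pipeline idea he mentions, here is a small hypothetical Java sketch (my illustration, not code or stage names from the book): every change flows through an ordered series of automated gates, and only a change that passes every gate is considered releasable.

```java
import java.util.List;
import java.util.function.Supplier;

// Hypothetical sketch of a deployment pipeline: an ordered series of automated
// gates. A release candidate is only releasable if it passes every gate, and a
// failing gate stops the run immediately to give fast feedback.
final class ToyDeploymentPipeline {

    record Stage(String name, Supplier<Boolean> gate) {}

    static boolean run(List<Stage> stages) {
        for (Stage stage : stages) {
            boolean passed = stage.gate().get();
            System.out.println(stage.name() + ": " + (passed ? "passed" : "FAILED"));
            if (!passed) {
                return false; // stop at the first failing gate
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Stage names are typical examples only; real pipelines vary.
        boolean releasable = run(List.of(
                new Stage("commit build and unit tests", () -> true),
                new Stage("acceptance tests", () -> true),
                new Stage("performance tests", () -> true),
                new Stage("deploy to staging", () -> true)));
        System.out.println(releasable ? "release candidate is deployable" : "release candidate rejected");
    }
}
```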
Jez and I started putting together-- we were being encouraged for a little while by various people but, eventually, we decided, well, we'll just go ahead and we'll do this. It took a long time. I'd left ºÚÁÏÃÅ by the time we finished the book. We spent about four years writing the book. It was a pretty, pretty major work from our point of view. We were doing it all in our spare time and so on.
One of the amusing things is that I think that, during the period when we were writing the book, Jez and I were on the same continent for less than a month over the four-year period that we were writing it. We were doing all this. We had continuous integration for the book. We stored the book in version control. We would do a little build to render the book to make sure that it all worked, all that kind of stuff. We were eating our own dog food.
I can remember having a conversation with Jez in a pub, one of the times when we were in the same country, where we were both musing on whether anybody would be interested in reading a book like this. At one point he said, "Well, I'd count it as a real success if I could afford to buy a shed out of the proceeds of the book." [chuckles] I think he more than got his wish.
Neal: I think it's interesting that you trace the lineage of the XP engineering practices into Continuous Delivery. I've often called Continuous Delivery the grown-up version of XP, because it's for bigger organizations, and extreme programming always had too much of an extreme-sports feel for a lot of enterprises because extreme sounded too extreme.
I'm really curious as to where the name came from, because if I had written that book, I would've given it some terrible name, like Effective Engineering Practices for Software Development or something, but Continuous Delivery is nice and tight. It says all that. Do you remember where the name came from?
Dave: It's funny you should say that. We agonized over the name for a long time. Just reinforcing your points on the XP link, absolutely 100%, I think that Continuous Delivery is the second generation of extreme programming. It extends the scope a little bit from the way that extreme programming talks about it. I'm sure that some of these ideas were in Kent Beck's and his team's heads to some extent.
We extended the range and formalized it, but absolutely 100% second generation XP. I don't think Continuous Delivery, as we recognize it today, would exist had XP not be-- it wouldn't have been the same thing.
The name, for a long time-- I was really annoyed because I wanted to call it continuous integration. I thought that this was the full end-to-end story of continuous integration. There was a book that was released that was called Continuous Integration. I was like, "Ah, damn." It was just the wrong time; it meant that we couldn't use the name for the book. I was annoyed about that for a while. We were batting different ideas around for names and we got all sorts of suggestions. We were doing the brainstorming, stuff like that.
We got Martin Fowler involved in the conversation. It wasn't really his idea, but it was his suggestion, as is my recollection. At some point during the conversation, he said something like, "Well, what he's really talking-- The agile practice that he's really talking about is continuous delivery, but that's not the right name," or something like that. I said, "No, that is the right name." That's focusing on what it is that we're really trying to achieve.
We're trying to work so that the software is continuously deliverable all of the time, and that's what we're about. It seemed like exactly the right name. As soon as he suggested it, it snapped into place I think. That's my recollection anyway.
Mike: Of course, the book was hugely successful, right? Like you alluded to just being able to afford a garden shed from the proceeds. It won the Jolt Award one year. I can't remember all the books that it beat out, but it beat out many, many awesome books that year to win that award, and it went on to have a huge following and impact on the industry. This is a bit of a double-edged sword, but I feel like this is one of those things: nobody would admit they're not doing continuous delivery these days.
Dave: Yes, I think that's pretty close to the truth. The book was enormously successful, much more successful than either Jez or I ever anticipated. We thought a few nerdy people might like it. That's where we were, but it turns out there are quite a lot of nerdy people, maybe, I don't know. [laughs] We won the Jolt Award. My publishers recently told me that the book won a place in the top 25 books on software of all time. [laughs] There was a survey, which is ridiculous. It's absolutely crazy.
Mike: It really speaks to, I guess, the nerve that you hit or the thing that was underserved, because I think the IT industry sometimes does not do a good enough job of empathizing with "the business" and all of that piece of the world.
To me, the difficulty in getting software out the door was one of the key reasons that people would look at the IT nerds and say, "What the hell are you doing? Why is it so hard to do all this stuff? What do you mean you need another million bucks to upgrade the database?" All those kinds of things that IT always gets slapped with. To me, that was even, maybe not all of the guts of the detail in the book, but the concept was important even to non-developers as well.
Dave: I think that's closing the-- or not necessarily closing the loop, but enhancing the themes that we've picked up on so far. I think that's one of the reasons why it landed and became so popular as an idea. Because as soon as you start using the right sets of words in the right context, it's easy to have the conversation.
You go to a non-technical CEO in a business that doesn't think of itself as being software-led and say, "Wouldn't you like your software delivery function to continuously deliver value to your customers?" They're going to say, "Yes," of course they are. Of course, that's what everybody wants. The technicalities of making-- the practicalities of making that work are very, very far-reaching.
One of the things that often gets missed in the Continuous Delivery book is that Jez and I talked about using this as a tool to evolve your products. It's not just about deployment automation or test automation. It's the strategy that you use to build great products if you're a business. It's that kind of book. I think it's a book of ideas. It's a very broadly drawn scope in the book, which we were scared of. We were nervous.
As we started coming towards the final draft of the book, we were saying things to each other in private like, "We've got a methodology here. That's scary." That's not where we thought we were going. We thought we were going to be writing about the technical details of doing continuous integration well. We ended up in this place. Now, I feel much more comfortable with that because, now, we've seen it work.
Tesla are a continuous delivery company, and they do it not just for their software, they do it for the cars. They do it for the factory, for goodness sake. There was a recent update at Tesla. They upgraded the production line so that the maximum charge rate of a Tesla Model 3 went from 200 kilowatts to 250. That involved a physical change in the routing of some cabling in the engine, in the motor, the power distribution system and all that kind of thing, but it was a software change. It went through; it passed the relevant tests in their continuous integration cycle.
The factory was reconfigured in three hours. In three hours, it was spitting out cars that were different to before. This isn't just about simple software changes. This is quite a big idea. I've come to think about this in different terms now. I am a nerdy person, and I've always been obsessed and fascinated by science as a hobbyist. I'm an avid reader of physics and science in general and loads of things. That informs the way that I think about things.
I did think about-- I do tend to use that reasoning and apply it to my work and my job, and did then when I was doing my part of writing the book. I didn't really analyze what was going on until afterwards. Then with hindsight, I think that one of the reasons why Continuous Delivery is important to software development, the industry, and as a result to the world, is because it's an application of scientific-style reasoning to solving practical problems in software.
It's a pragmatic, informal approach to applying those things in many ways, but it's also very disciplined. It uses, to my mind, some fairly deep scientific principles, which is a theme in my work these days, I think. As a result of that, it works. Science is humanity's best problem-solving technique, and so it works. You were saying that most organizations these days would claim to be practicing continuous delivery in some form. Most are probably not yet, but most aspire to it. Winning hearts and minds to that degree is the first step, maybe, to helping move things on.
There are some organizations, like the ones I've mentioned already, that are flying. I regularly say this in public, and I'm starting to say it now without being embarrassed by saying it: I think that this is state-of-the-art for software development. I think this is as good as we know how to do at the moment.
Neal: I think we both agree with that and would love to see it continue to gather steam. This is actually a great segue from talking about applying science to software into the thing that lured you away from ºÚÁÏÃÅ, which I think was LMAX.
Dave: Yes, that's right.
Neal: The other reason you may have heard Dave's name is because of LMAX. Can you describe a little bit about LMAX and what problem it solved in an interesting way?
Dave: While I was in the midst of writing the Continuous Delivery book, an old friend of mine, who I considered to be one of the best software developers in the world, Martin Thompson, came to me with a proposition and lured me away. He said, "We've got this problem and nobody knows how to solve it. We've got a blank sheet of paper, and we don't know how we're going to go about solving this problem."
He knew how to push all of my hot buttons. Two weeks before this, I'd had my career review at ºÚÁÏÃÅ and said, "This is the best job I've had so far, and I'm really enjoying myself. I've been here for five years. I can see myself being here for another five years, very easily," all that stuff.
Two weeks later, Martin Thompson's in my ear trying to sell me on this idea of solving problems that nobody knew how to solve, these world-class problems. That went, "Ooh, that sounds interesting."
He wanted to hire me as the head of engineering for this project. The project was to build one of the world's-- if not the world's-- highest performance financial exchanges, and to liberate trading so that anybody could turn up over the internet with $10 in their pocket and put it into an account. They could start trading on an equal footing, on an equal basis, with anybody else.
Mike: This is 2010, let's remember, before-- 2007, sorry. Way before all of the current trend of hooking people on the Robinhood app and all that kind of junk, where we've got all this retail investor trading, so this is much-- [crosstalk]
Dave: Yes, it's a long time ago. We'd got this big problem, and it was funded by one of the sports betting companies, Betfair, that had done very well initially. Later, it was spun off, but it started there. We'd got some ideas for building this exchange. He hired me as head of software development. I was the first hire who wasn't one of the founders of the company, one of the directors of the company. I was hired as head of software development. Part of my job was to build a development team and the development function for this project.
We started hiring people, and I was full of Continuous Delivery because I was in the midst of writing the book. I had all of this great experience working at ºÚÁÏÃÅ on projects using these techniques. I knew how we were going to organize the team. We built the company really on a continuous delivery basis.
At the time, the more business-oriented people were away trying to raise funding and capital and position the business. I came in; Martin Thompson was there as the CTO. I was the head of software development, and we worked very well together. We made a good team. We started hiring people, and to everybody that we hired, we were saying: we're going to be doing test-driven development. We're going to be doing this strange thing called continuous delivery. We're going to be doing continuous integration on steroids.
All of this stuff's going to be highly automated. We're going to be very rigorous. We're going to be doing very high quality, and we're going to be building the world's fastest financial exchange in Java. We surprised everybody. We started working and building this stuff. We applied these techniques.
It was during the course of that experience that I deepened my thinking about applying these sorts of scientific style reasoning and engineering thinking to solving problems. I started to clarify my thinking further and refine that further over my time there in terms of what some of the lessons that I could pick up from that were.
Mike: Dave, can you just expand on the problem a little bit for people? As I understand it, part of the problem is, with an exchange, any trade will affect subsequent prices of any other trade. Therefore, you can't parallelize the problem in a way that people are maybe used to.
Dave: Yes. It's a really interesting problem because what you've got in an exchange is that, in any given moment, there's a price. Everybody in that moment who is trading is competing for that price. All of the load concentrates down into this one point. There are two parts of that problem that are interesting. One is the traders coming in trying to get that price quick. They want fast, efficient processing.
Also, there are organizations that make money from trading called market makers whose job is to keep the liquidity in these kinds of trading venues. What they want to be doing is that they want to be able to evaluate something, whatever it is, how much your cup of coffee's worth. At any given point, they're going to try and price that and say, at the moment, this is worth this amount of money. There's going to be some inaccuracy in my calculation of that value.
I'm going to charge you an extra bit to buy something and an extra bit to sell something in opposite directions. That's called the spread. What market makers do is that they make money on the spread. They don't really know what the price is and they don't really care whether you buy or you sell either one. They're not betting on whether it's--
All they're trying to do is get the price in the right place so that the difference, as long as there's a roughly even amount of buying and selling, they'll make money on the difference between the two prices. It's like, when you go and you buy money to go abroad, you pay a bit more to buy the money and you get a bit less when you sell the money back to the people.
It's the same problem. In their interest, what they want to be able to do, because they're competing with other market makers, is to track this price, which is moving all of the time. The further out in time, the wider the spread, and the less appealing that is to people that want to buy or sell things. There's a tension between wanting a narrow spread so that people get the best price when they're buying. You're not paying too much for your holiday money or getting too little when you sell it back to them at the end.
You want the spread to be narrow. For the spread to be narrow, you don't want your predictions to be going too far ahead in time. You've got this very, very narrow time horizon. Modern exchanges are changing that price probably once a millisecond, maybe less frequently than that, but that's the ballpark. In order to be able to do that and to be able to do that at any scale, with more than one, you've got to be able to go faster than that.
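To make the spread idea concrete, here is a small hypothetical Java sketch (my illustration, not LMAX code): a market maker quotes a bid below and an ask above its estimated mid price, and widens the spread as its estimate gets staler.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Hypothetical illustration of quoting around a mid price.
// The staler the price estimate, the wider the quoted spread.
final class Quote {
    final BigDecimal bid;
    final BigDecimal ask;

    Quote(BigDecimal bid, BigDecimal ask) {
        this.bid = bid;
        this.ask = ask;
    }

    @Override
    public String toString() {
        return "bid=" + bid + " ask=" + ask;
    }
}

final class MarketMaker {
    // Quote around the estimated mid price: subtract half the spread for the bid,
    // add half the spread for the ask. Profit comes from the spread, not direction.
    static Quote quote(BigDecimal estimatedMid, BigDecimal baseSpread, long millisSinceLastUpdate) {
        // Widen the spread as the estimate gets staler (purely illustrative formula).
        BigDecimal staleness = BigDecimal.valueOf(1 + millisSinceLastUpdate / 1000.0);
        BigDecimal spread = baseSpread.multiply(staleness);
        BigDecimal half = spread.divide(BigDecimal.valueOf(2), 4, RoundingMode.HALF_UP);
        return new Quote(estimatedMid.subtract(half), estimatedMid.add(half));
    }

    public static void main(String[] args) {
        BigDecimal mid = new BigDecimal("100.00");
        BigDecimal baseSpread = new BigDecimal("0.02");
        System.out.println("fresh estimate:  " + quote(mid, baseSpread, 0));     // narrow spread
        System.out.println("1s old estimate: " + quote(mid, baseSpread, 1000));  // wider spread
    }
}
```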
The peak load-- one of the things that we learned by measurement, with our engineering discipline, is that averaging out these performance figures is no good at all, because sometimes the peak is the equivalent of millions of transactions per second over very, very narrow time horizons.
It's a really interesting problem. We stepped slap bang into the middle of it. You've got the very high-performance demands of trading, this narrowing of focus to a single price. Then we wanted to make that available over the internet. We were hoping then to be able to scale this out so that anybody, hopefully, thousands, tens of thousands, hundreds of thousands, maybe even millions of people, could go onto the internet and trade against this. That was unheard of at that point.
It's not the kind of thing that you can parallelize out. In some places you can, but not at these points of the matching, because that affects everything. The price at which a bid is matched with an ask, that's got to be on one thread. That's got to be in one place. We came up with a variety of strategies that we thought would be fast.
We started out trying to build something called a staged event-driven architecture, in which computation is broken out into discrete nodes. Any given context is always evaluated on the same thread. If you updated your accounts, your account would always be updated on that thread. There was no sharing of data across threads in that model.
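A rough sketch of that thread-affinity idea, under my own assumptions rather than the actual LMAX implementation: every account is owned by exactly one single-threaded partition, so its state is never shared across threads and needs no locks.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only: every account is owned by exactly one partition,
// and each partition is a single thread with its own private state, so no
// account is ever touched by two threads and no locks are needed.
final class PartitionedAccounts {

    private static final class Partition {
        final ExecutorService thread = Executors.newSingleThreadExecutor();
        final Map<Long, Long> balances = new HashMap<>(); // only touched by this partition's thread
    }

    private final Partition[] partitions;

    PartitionedAccounts(int partitionCount) {
        partitions = new Partition[partitionCount];
        for (int i = 0; i < partitionCount; i++) {
            partitions[i] = new Partition();
        }
    }

    /** Route the update to the single thread that owns this account. */
    void credit(long accountId, long amount) {
        Partition p = partitions[(int) Math.floorMod(accountId, (long) partitions.length)];
        p.thread.execute(() -> p.balances.merge(accountId, amount, Long::sum));
    }

    void shutdown() throws InterruptedException {
        for (Partition p : partitions) {
            p.thread.shutdown();
            p.thread.awaitTermination(1, TimeUnit.SECONDS);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PartitionedAccounts accounts = new PartitionedAccounts(4);
        for (long account = 0; account < 8; account++) {
            accounts.credit(account, 100);
        }
        accounts.shutdown();
        System.out.println("all credits applied on their owning threads");
    }
}
```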
We built that system. We built a version of the exchange based on that. We were doing performance test-driven development, as well as everything else, as part of our deployment pipeline. We ran our tests, and we were nowhere close. We knew where the threshold for the game was. We were very precise in terms of what we needed: to be able to do 10,000 messages per second with no latency longer than two milliseconds in response. That's what we were shooting for as a starting point.
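A hedged sketch of what such a performance threshold might look like as an automated check (the numbers come from the conversation; the handler and structure are invented for illustration, not the real LMAX test suite):

```java
import java.util.function.LongConsumer;

// Illustrative performance assertion: push a burst of messages through a handler
// and fail if throughput drops below 10,000/sec or any message takes over 2 ms.
final class ThroughputAndLatencyCheck {

    static void check(LongConsumer handler, int messages) {
        long maxLatencyNanos = 0;
        long start = System.nanoTime();
        for (long i = 0; i < messages; i++) {
            long before = System.nanoTime();
            handler.accept(i);
            maxLatencyNanos = Math.max(maxLatencyNanos, System.nanoTime() - before);
        }
        long elapsedNanos = System.nanoTime() - start;
        double perSecond = messages / (elapsedNanos / 1_000_000_000.0);

        if (perSecond < 10_000) {
            throw new AssertionError("throughput too low: " + (long) perSecond + "/s");
        }
        if (maxLatencyNanos > 2_000_000) { // 2 ms, checking the worst case rather than the average
            throw new AssertionError("worst-case latency too high: " + maxLatencyNanos / 1_000_000.0 + " ms");
        }
        System.out.printf("ok: %.0f msgs/s, worst case %.3f ms%n", perSecond, maxLatencyNanos / 1e6);
    }

    public static void main(String[] args) {
        // Stand-in for the real matching logic: just sum into an accumulator.
        long[] acc = {0};
        check(value -> acc[0] += value, 100_000);
    }
}
```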
We were nowhere close to that on our first go with this staged event-driven architecture. We started more detailed profiling. What we found was that we couldn't see the business transaction in our profile. Everything else was way more costly than the business transaction. That's normal for software, it turns out. The job that we are paid to do doesn't even surface as a blip in your profile of the performance of your system. Everything else costs way more.
We started looking at this. We realized that we were spending orders of magnitude more time trying to figure out which thread to do work on than the work was taking to do on the thread. We said, well, let's not do that. Let's do it all on one thread. We changed our architecture, and we came up with our own version. This is a very loose approximation, but we came up with some technology that we later open-sourced, called the Disruptor, which allowed us to exchange information between threads close to the limits of the hardware.
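The real Disruptor was open-sourced by LMAX; the toy below is only my illustration of the underlying idea rather than its actual API: a preallocated ring buffer where a single producer and a single consumer coordinate through sequence counters instead of locks.

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy single-producer/single-consumer ring buffer, illustrating the idea behind
// the Disruptor: slots are preallocated, and the two threads coordinate only
// through two sequence counters instead of locks or allocating queues.
final class ToyRingBuffer {
    private final long[] slots;
    private final int mask;
    private final AtomicLong published = new AtomicLong(-1); // last slot the producer has filled
    private final AtomicLong consumed = new AtomicLong(-1);  // last slot the consumer has read

    ToyRingBuffer(int size) {
        slots = new long[size]; // size must be a power of two for the mask trick below
        mask = size - 1;
    }

    void publish(long value) {
        long next = published.get() + 1;
        while (next - consumed.get() >= slots.length) {
            Thread.onSpinWait(); // buffer full: wait for the consumer to catch up
        }
        slots[(int) (next & mask)] = value;
        published.set(next); // volatile write: makes the slot visible to the consumer
    }

    long take() {
        long next = consumed.get() + 1;
        while (next > published.get()) {
            Thread.onSpinWait(); // buffer empty: wait for the producer
        }
        long value = slots[(int) (next & mask)];
        consumed.set(next);
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        ToyRingBuffer ring = new ToyRingBuffer(1024);
        Thread producer = new Thread(() -> {
            for (long i = 1; i <= 1_000_000; i++) ring.publish(i);
        });
        Thread consumer = new Thread(() -> {
            long sum = 0;
            for (long i = 1; i <= 1_000_000; i++) sum += ring.take();
            System.out.println("sum = " + sum);
        });
        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```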
That was one route to optimization. The other one is this architecture, which is still the nicest system that I've worked on, the nicest architecture that I've worked on, what it ended up being. The terminology that we'd use now is that we built this very, very DRY, high-performance service mesh, which completely separated the accidental complexity of the system from the essential complexity.
Then we'd got these little nodes, these little bubbles that represented the services in our system that were stateful and single-threaded. They were beautiful to work on because they were incredibly easy to test, beautiful little pure bubbles of domain-driven design logic. They were the simulations of the problem that we were solving.
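Purely to illustrate why such single-threaded domain bubbles are so pleasant to test (a made-up example, not LMAX code): the logic is a plain object with private state and no I/O, so exercising it needs nothing but method calls.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative domain "bubble": a single-threaded pre-trade risk check with
// private state and no I/O. Because nothing else touches its state, a test is
// just plain method calls and assertions: no threads, no mocks, no containers.
final class PositionLimitCheck {
    private final Map<String, Long> positionByInstrument = new HashMap<>();
    private final long limit;

    PositionLimitCheck(long limit) {
        this.limit = limit;
    }

    /** Accepts the order only if it keeps the absolute position within the limit. */
    boolean accept(String instrument, long signedQuantity) {
        long current = positionByInstrument.getOrDefault(instrument, 0L);
        long next = current + signedQuantity;
        if (Math.abs(next) > limit) {
            return false;
        }
        positionByInstrument.put(instrument, next);
        return true;
    }

    public static void main(String[] args) {
        PositionLimitCheck check = new PositionLimitCheck(100);
        System.out.println(check.accept("GBPUSD", 80));   // true
        System.out.println(check.accept("GBPUSD", 30));   // false: would breach the limit
        System.out.println(check.accept("GBPUSD", -50));  // true
    }
}
```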
Then, external to that, we did this ultra-high-performance asynchronous messaging, resilience, clustering, persistence, everything else. When I left, we were processing about 1.3 times the daily data volume of Twitter. That was at its heart, at the point where all of this came together. It was running on two servers, 1.3 times the daily data volume of Twitter globally, on two servers.
Mike: On two servers, single-threaded where it counts. This led you to coin the term mechanical sympathy. Can you explain a little bit what you mean by mechanical sympathy? Because when I heard this, I thought it was fantastic. We blipped it on the Radar. Everybody started talking about mechanical sympathy. What is that?
Dave: Martin is an F1 fanatic and has been for a long time. In the 1970s, one of the greatest race drivers-- when Formula 1 was even scarier than it is now, when it was killing people on a regular basis, one of the great racing drivers was a guy called Jackie Stewart. He was renowned for being able to push a car to absolute limits in all kinds of atrocious conditions and still win races.
In an interview one time, somebody asked him, "Did you have to be an engineer to be a racing driver?" He said, "No, but you had to have mechanical sympathy with the car." That stuck in Martin's mind. We were writing code that was at the limit. I talked about one millisecond response time just to paint a picture of this.
A network packet coming into the edge of our system did two network hops to get to the exchange. It was processed in the matching engine, then two network hops to get out. The round trip time for that on average, by the time that we got to where we wanted, was 80 microseconds, including the network costs. The network cost was 12 microseconds per hop. That's from the wire, crossing the boundary so that you could start processing it in Java. We measured that. We knew that at the time.
We came up with this idea of mechanical sympathy, which was understanding enough about the hardware so that you could take advantage of it and use it to your advantage. Modern hardware, even in those days-- This is a long time ago, but modern hardware is miraculous really in terms of its capacity and its ability to process things. We, software people, tend to be very wasteful in the way that we take advantage of it.
We started looking at these things, and there are all sorts of things that we ought to worry about that we often ignore when we're not focused on this real hyper-performance. We'd got the business driver: we needed to solve this problem, and we were looking at it. We'd got a reason to do it. Again, this is more broadly applicable, but we started looking at this, and at things like disc storage. In those days, it was still physical hard discs, not SSDs.
Everybody thinks of disc storage as being random access. In time, it's not. A rotating rust disc is spinning round, and if you choose the wrong time to try and write to a sector that's on the other side of the disc, you've got to wait for it to get there. That has an enormous impact on the performance. At that time, discs were being optimized to play movies, and so they were really, really great, really efficient serial devices.
If you treated them as a block-organized serial device, you could increase the rate, the throughput, of storing things to disk by two to three orders of magnitude compared with treating them as a random access device. Cache lines in memory, the way in which your processor pulls in and populates the cache in the processor, is important. Writing software that's cache-friendly has the biggest impact on high performance, because a cache miss means many, many orders of magnitude of change in the efficiency of the software.
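A classic way to see the cache-line effect for yourself; this is a crude, hedged illustration rather than a rigorous benchmark, and the exact ratio depends entirely on your hardware: the same array is summed once in memory order and once in a strided order that touches a new cache line on every access.

```java
import java.util.function.LongSupplier;

// Crude illustration of cache friendliness (not a rigorous benchmark): summing
// the same array in memory order versus a stride that hits a new cache line on
// every access. The strided passes are usually noticeably slower.
final class CacheFriendlyTraversal {

    static volatile long sink; // used so the JIT cannot discard the sums as dead code

    static long timeMillis(LongSupplier work) {
        long start = System.nanoTime();
        sink = work.getAsLong();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        int[] data = new int[1 << 25]; // 32M ints = 128 MB, far bigger than any CPU cache

        long sequentialMs = timeMillis(() -> {
            long sum = 0;
            for (int i = 0; i < data.length; i++) sum += data[i];
            return sum;
        });

        // Stride of 16 ints = 64 bytes, i.e. one access per cache line, 16 passes in total.
        long stridedMs = timeMillis(() -> {
            long sum = 0;
            for (int start = 0; start < 16; start++) {
                for (int i = start; i < data.length; i += 16) sum += data[i];
            }
            return sum;
        });

        System.out.println("sequential: " + sequentialMs + " ms, strided: " + stridedMs + " ms");
    }
}
```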
Thinking along those lines, and understanding enough about the nature of the hardware to start writing our software so that it went with the grain of the hardware rather than against it, and took advantage of the capacity of the hardware, was really the secret to our stuff. The single-threaded thing is one of those outcomes. I was having a debate on social media this week with somebody. We were talking about imperative versus declarative languages. He was saying the big advantage of declarative languages is that you can auto-parallelize. I said, "Yes, but that will go slower," because the costs of parallelization, however you organize them, are significant.
Just two threads exchanging information on a counter, versus doing the work on a single thread, is about 300 times slower. The cost of a single synchronization is 300 times the cost of doing it on the one thread. You'd need 300 threads to get the equivalent performance. This goes back to old-school things like Amdahl's Law, that sort of thing, about the parallelism of problems. There's lots and lots of interesting stuff about this, but it's much more complicated than it looks on the surface.
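A crude way to feel that cost (not a proper benchmark; for real measurements you would reach for a harness like JMH, and the ratio varies a lot by machine): compare incrementing a plain local counter on one thread with two threads contending on a shared atomic counter. Amdahl's Law makes the broader point: if only a fraction p of the work can be parallelized, the speedup on n processors is capped at 1 / ((1 - p) + p/n).

```java
import java.util.concurrent.atomic.AtomicLong;

// Crude illustration (not a proper benchmark): incrementing a plain local counter
// on one thread versus two threads contending on a shared atomic counter.
// The single-threaded loop is typically dramatically faster per increment.
final class ContentionCost {

    public static void main(String[] args) throws InterruptedException {
        final long perThread = 50_000_000L;

        // One thread, no sharing, no synchronization.
        long start = System.nanoTime();
        long local = 0;
        for (long i = 0; i < 2 * perThread; i++) local++;
        long singleMs = (System.nanoTime() - start) / 1_000_000;

        // Two threads contending on one shared atomic counter.
        AtomicLong shared = new AtomicLong();
        Runnable work = () -> {
            for (long i = 0; i < perThread; i++) shared.incrementAndGet();
        };
        Thread a = new Thread(work), b = new Thread(work);
        start = System.nanoTime();
        a.start(); b.start();
        a.join(); b.join();
        long contendedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("single thread, local counter: " + singleMs + " ms (result " + local + ")");
        System.out.println("two threads, shared atomic:   " + contendedMs + " ms (result " + shared.get() + ")");
    }
}
```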
We can certainly use these abstractions. Parallelizing problems is great for certain classes of problem. As soon as you need to synchronize those streams of information, again, that comes at enormous cost. It's going to go really slowly now.
Mike: Dave, in today's world of cloud everything, is mechanical sympathy still relevant given all of those layers of abstraction between the software most people write and the actual hardware?
Dave: I think it matters. It matters at different scales and in different resolutions. One of the things, for example, that strikes me is, if you look at, let's say, the example of a cloud-based serverless system. Pre-cloud, the history of computing was really about the cost of data storage, because that's what cost a lot. The processing was expensive, but data storage is where the real cost was. The cloud flipped that on its head. Now, if you've got a serverless system, storage is like water. It's so cheap that you don't have to worry about that so much, but the cost is per CPU cycle.
You want to try-- just for commercial reasons, if you want to make an efficient cloud-based serverless system, you want to optimize for CPU usage, not for data storage, which means that ideas like normalizing your data are probably stupid. You want to de-normalize your data. You want to shard it all out and have lots of copies. It means that ideas like eventual consistency and stuff like that are probably a better thing to think about than trying to coordinate things. Otherwise, you're going to spend lots of money.
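As a hypothetical sketch of that trade-off (the names and in-memory maps below are stand-ins I've invented, not a real cloud API): store the data in the denormalized shape that each read needs, so the hot read path is a single cheap lookup instead of CPU spent joining and aggregating on every request.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical illustration of trading cheap storage for expensive CPU:
// instead of recomputing a customer's summary on every read (joins and
// aggregation cost CPU cycles we pay for), we also store a ready-to-serve
// summary document and keep it updated on each write.
final class DenormalizedOrderStore {

    record Order(String customerId, long totalCents) {}
    record CustomerSummary(String customerId, int orderCount, long lifetimeSpendCents) {}

    // Stand-ins for two cheap key-value tables (e.g. a document store).
    private final Map<String, List<Order>> ordersByCustomer = new HashMap<>();
    private final Map<String, CustomerSummary> summaryByCustomer = new HashMap<>();

    /** One write updates both copies: duplicated data, but each read stays a single lookup. */
    void recordOrder(Order order) {
        ordersByCustomer.computeIfAbsent(order.customerId(), id -> new ArrayList<>()).add(order);
        CustomerSummary old = summaryByCustomer.getOrDefault(
                order.customerId(), new CustomerSummary(order.customerId(), 0, 0));
        summaryByCustomer.put(order.customerId(),
                new CustomerSummary(order.customerId(), old.orderCount() + 1,
                        old.lifetimeSpendCents() + order.totalCents()));
    }

    /** The hot read path: no joins, no aggregation at request time. */
    CustomerSummary summaryFor(String customerId) {
        return summaryByCustomer.getOrDefault(customerId, new CustomerSummary(customerId, 0, 0));
    }

    public static void main(String[] args) {
        DenormalizedOrderStore store = new DenormalizedOrderStore();
        store.recordOrder(new Order("alice", 1200));
        store.recordOrder(new Order("alice", 800));
        System.out.println(store.summaryFor("alice")); // orderCount=2, lifetimeSpendCents=2000
    }
}
```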
I've got friends, Gojko Adzic and David Evans; the two of them run a business and they've got a couple of very popular software products that are cloud-based. Gojko is an expert in the use of cloud systems. He laughs outrageously when he-- He regularly tells people how much his monthly bills are on Amazon to host his software, which is used by 30-odd million people, and it's pennies. It's absolute pennies because he was smart about-- they were both smart about the way in which they designed it and took advantage of it.
That's using mechanical sympathy. It's understanding enough about the hosting system and how that works to take advantage of it. The other thing that's close to my heart is that I think we've done the world something of a disservice as software developers, because I can't think of another field of human endeavor that is as tolerant of waste as we are. Modern software is so inefficient. Martin Thompson is a world expert on high-performance systems. I rode his coattails to some extent, compared to how good he is at this stuff.
I can go into most places, even places that think they're doing high performance, and I can probably give you an order of magnitude, maybe two, of improvement on almost any software, because mostly it's just simple, dumb stuff, really, really easy things to fix, those sorts of things. Martin can probably get you three or four orders of magnitude if you want to pay his rates. That has a direct commercial and climate change impact.
I read an article in New Scientist-- I'm trying to remember the numbers, but it said something like 12% of carbon emissions come from data centers, and that's an ever-increasing number because we're building more data centers. 12% of the carbon that we emit as a species is coming from data centers. If I could improve the efficiency of software globally by one order of magnitude, that's 1.2% of global carbon emissions. We don't think about those costs of software, but they're real.
I think the idea of worrying about the efficiency of our systems is worthwhile. It has commercial impact for the organizations, the employers: you can do more with less hardware. It has an environmental impact in terms of the carbon footprint of software, which is a pretty big deal. It makes the problem-- it makes it more interesting, and it drives things in good directions. Particularly if you're using tools like continuous delivery to do this kind of thing, that drives you in the direction of making it easier to change these things.
One of the really surprising side effects of all of this-- I'm aware that I've probably made this sound really complicated, but one of the surprising side effects of all of this is that actually the software that you end up with is easier and nicer to work with, not harder. If you want high-performance software, you want it to be simple by definition. High-performance software is doing the most work with the least number of instructions so it's going to be simpler. If you are making your software more abstruse in order to make it high-performance, you're doing it wrong in my book.
Neal: Yes. In fact, Amazon now has a head of sustainability, so they're trying to encourage developers to think more about how they use cloud storage and use it more intelligently than just in a shotgun approach. Hopefully, we'll see more of that direction as time goes by.
Dave: Yes, absolutely. I'm aware that I sound like I'm anti-abstraction and anti-raising the bar. It's not really about that, but we need to be smart. We can't just say, ah, we'll just throw hardware at it all the time. That's not the solution.
Neal: Before we run out of time, I would definitely want to talk about the Continuous Delivery YouTube channel and your latest book. Let's talk about those two things. Your Continuous Delivery YouTube channel, what caused that?
Dave: That was a complete accident. At the start of the pandemic, I was in reasonably high demand as a consultant. I was flying all around the world, my carbon footprint was pretty bad, advising companies on how to improve their software development practices and engineering approach and so on. The pandemic hit, all of my travel stopped, and I thought, well, what am I going to do? I thought, well, it'd be fun to just get some of this stuff out there.
I'd got some stuff that I think can help people. I started off just using the kit that I'd got: the old cameras that I'd got, old microphones that I'd got. If you look at some of my very early videos, the production quality's-- let me say that the production quality of my YouTube channel has moved on somewhat. [chuckles] I started doing it, and I'm very proud of myself. My son and my wife helped me with the channel in different aspects.
My son does a lot of the social media stuff and marketing, thumbnails and things, and search engine optimization. I started off and I didn't have any great ambitions for it. It was really, initially, something to do while we were waiting for the pandemic to ease off so we could start going back to normal. That took longer and longer, inevitably, as we all found out. I started producing a video every week and releasing it every week. We did pretty well. In the first year, we gained about 2,000 subscribers and we were delighted. We were really over the moon. We didn't anticipate anything like that, really.
My wife, my son, and I were having bets on whether we'd hit 2,000 subscribers by the end of the year, that first year, 2020. Just around about Christmas time, we released the Cyberpunk video. This video game had been released, Cyberpunk 2077, whatever it was called, I've forgotten. I did a video on that, trying to mine some software development learnings from what I could publicly find out about the problems with it. That went, at least for my videos, that went viral. We got half a million views in a few weeks, which was just entirely crazy.
That bumped us up to another level. We got to about 20,000 subscribers. Then it's been growing ever since. I got my silver plaque a few weeks ago for a hundred thousand subscribers. In the two years since we started, we've released at least one video every week so far. The channel covers all sorts of topics. It's called Continuous Delivery, if anybody's interested; if you go to YouTube and search Continuous Delivery, you'll find our channel. It's broadened out. I talk about all sorts of aspects of software development, software engineering, continuous delivery, agile thinking, those sorts of things.
I tend not to do "this is how to write this shell script" or "this is how to do this in a language" kind of videos. I tend to talk about broader concepts. I talk about things like how to get a good job, the difficulties and complexities of using microservices, how to do test-driven development, how to build a deployment pipeline, all sorts of things. It's got its own momentum now, and it's built up a community, which is fantastic. Occasionally, you get some crazy people saying some odd things, as you do on social media. Mostly the feedback that I get is, "I just tried what you said. We tried it in our team and it works. Thank you." That's a brilliant thing. That's such a fantastic feeling. I feel I've had a hand in getting some people to try out continuous integration and test-driven development who wouldn't otherwise. That makes me feel quite proud. [laughs]
Mike: That's fantastic. Let's talk about the book, so Modern Software Engineering. I know we're close to time here, but let's talk about that. What misconceptions do most of us have about software engineering? The crux of the book is the engineering word.
Dave: Absolutely. I agonized a little bit over the title because I knew that was the title that I wanted, but I knew that it would be misread. I think the biggest misconception that people have in software is that, when you say engineering, most people think of some big, bureaucratic, rigid process. I think that's just deeply wrong. I just think that's completely misunderstanding what engineering is. I think it's confusing it with one of the commonest forms of engineering, to be fair, in other disciplines, which is production engineering.
If you are building physical devices, the problem of productionizing that thing is the hardest part. The design is interesting and creative and challenging, but the really difficult part is to be able to mass-produce it at a price, getting all the materials at the right-- all the logistics. That's a problem that we in software never have, because production for us is free. We can reproduce the stream of bytes that represents our system, however complex, essentially for free.
We don't have a production problem. Our problem is different to that. I think that we've taken a lot of missteps by virtue of not seeing that there's a distinction. Our problem it seems to me is much more closely aligned with design engineering. Imagine designing something unique, something that's never existed before, because the cost of production is free, so why would you not be doing something unique? You're building the Curiosity Rover to land on Mars, or you're Elon Musk building the Starship in Texas. Then that's a different form of engineering. With that kind of engineering, what you do is that you try to organize things to go really fast, so that you could try stuff out and figure out what works and what doesn't.
The other aspect of engineering that I think that we get wrong is we, technologists, are technical people. We probably did maths at school, that kind of stuff. I think we often confuse technical with maths, and they're not the same thing. I love maths. We use maths. There's a deep degree to which software is hugely informed by mathematical thinking, not the same thing as engineering though. Engineering has a pragmatic aspect to it.
If you are designing the first of a new type of airplane or something like that, yes, you'll do all of the maths. Yes, you'll have models and you'll do the computer simulations, but then you're going to go and test flight. You're not going to just put passengers in it on the first flight. You're going to try stuff out. You're going to break things. You're going to bend the wings until they break and see what happens. [chuckles] You're going to do all of this kind of stuff.
There's a difference to engineering that is not about thinking that you know the answers. This is one of the really deep things I think that we learn from science. I think that the huge difference in all of the-- For me, the thread that pulls all of the things that we've talked about this afternoon together is that I think that the way that human beings solve complex problems is by starting out by assuming that we don't know the answer, not by pretending that we know all of the answers.
You start off assuming that we don't know the answer, and whatever we guess first is probably wrong. We want to be defensive. We want to organize our work in a way that, when we find out where we're wrong, we can fix it. It's like Neal and Rebecca's book about evolutionary architecture. That's deeply part of what I'm talking about too. We need to organize things in a way so that we can work iteratively so we can get fast feedback on what we are doing. We can observe what we're doing and we see, "Oh, we're wrong there, let's fix that." Also incrementally, so we can build on the learnings that we've got piece by piece by piece until we end up with something complex. I think that's the only way that human beings ever build anything complicated.
To ignore that, to not put that at the heart of our discipline, is a huge mistake. That's the first half of the book. The second half is about managing complexity. My thesis, in terms of trying to frame modern software engineering, is that I think we should become world-class experts at learning. There are things like iteration, feedback, experimentalism, empiricism, and incrementalism, those sorts of ideas, that are at the heart of that.
Then I think we need to be world-class at managing complexity. We need to compartmentalize the systems so that they are defensible. We can make a change over here without worrying-- forcing change over there. Make a change over here and get this wrong, and then change it later on without making work over there, all of those sorts of things. That's about ideas like modularity, cohesion, separation of concerns, coupling, abstraction, those sorts of things. That's kind of the second part of the book. Then there's a bit that ties it together and says how you use these as tools.
They're all deeply intertwined, but that's really the thesis of the book; it's my belief. My argument is that, if you were to organize your work in the way that I describe in the book, whatever it is that you're doing, whatever technology you're working on, whatever problem you're trying to solve, you'll do a better job and you'll do it faster if you do this than if you don't.
If I'm correct and my book is describing something that is genuinely an engineering discipline, we'd expect that as a result. If it didn't do that, it wouldn't count as engineering. That's what I was trying to get at. That's what the book is meant to try and do.
Neal: Well, in many ways, it sounds like the book is a culmination of all these myriad experiences you've had as you've been winding your way through the software development world.
Dave: Yes, I think it is. I don't know whether it's the last book that I'll write but, at the moment, I don't know what else I'm going to write about.
Neal: Well, there's still the autobiography that you could work on after this.
[laughter]
Neal: Well, it's absolutely a pleasure to talk to you, Dave, and always a fascinating conversation. We could talk about the other things that you've done for another hour, but we need to wrap up today. Thanks again for joining us and have a great rest of the day.
Mike: Thanks, Dave.
Dave: It's a pleasure. Thank you very much for asking me.