Brief summary
It's widely accepted that, in most cases at least, software systems should be modular, consisting of separate, discrete services. But what about the size of those services? How big or small should they be? This is where the question of service granularity comes in: too small and your system will become needlessly complicated; too big and you lose all the benefits of modularity you were seeking in the first place.
In this episode of the ºÚÁÏÃÅ Technology Podcast, host Ken Mugrage is joined by Neal Ford and Mark Richards — authors of multiple books on software architecture — to discuss service granularity. They explain why it matters and how software architects can go about getting it right, through the lens of granularity integrators and disintegrators.
- Learn more about Neal and Mark's 2021 book Software Architecture: The Hard Parts
- Find out more about the second edition of Neal and Mark's Fundamentals of Software Architecture, set to be released in early 2025
Episode transcript
Ken Mugrage: Hello, and welcome to another episode of the ºÚÁÏÃÅ Technology Podcast. My name is Ken Mugrage. I'm one of your regular hosts. I'm joined today by host/guest, Neal Ford.
Neal Ford: Hello, everyone. Welcome back to the podcast. I'm over here in the luxurious guest side of the studio today.
Ken: We also have a co-author of several books and a longtime companion of Neal's, Mark Richards.
Mark Richards: Hey, everyone. Hello, Ken. Hey again, Neal! [Laughter]
Neal: He says that because we have six Zoom calls a week and we're writing books together.
Mark: Very true.
Ken: Yes, writing is quite the collaborative effort. People think it's done alone on a beach somewhere, and nothing could be further from the truth. Today we're going to talk about the right granularity for software services. You know, the old, "How big should my house be?" I guess just opening thoughts from each of you, if you don't mind.
Mark: I'll tell you one good way to open this is a quote that I made a long time ago that seems to have proliferated everywhere. That quote is, "Embrace modularity but beware of granularity." Somehow that quote just took off. [Chuckles]
Ken: What's the difference? Those sound like the same thing to me.
Mark: You know what? Ken, a lot of times architects and developers do interchange these two terms, but they are very much different. Modularity is about the breaking apart of systems into different parts. Granularity is about the size of those parts. The reason for this quote about embracing modularity is in today's world, modularity gives us really good agility, gives us better scalability, a lot of the operational characteristics that we need in most systems. That, at least in my opinion, is a good thing, a very positive thing.
However, where we get into trouble is exactly the topic of this podcast, which is granularity, the size of those parts. Making those too small, all of a sudden we have too much communication and it looks like a big ball of distributed mud. Make those pieces too large and we lose that modularity and start losing all of the benefits of actually breaking our systems apart. Hence the quote.
Ken: Neal, is there a right answer? Can I ask you how big something should be and you have just an answer, or is it the old consultant's "it depends"?
Neal: Well, it's 42, but I just won't tell you what the unit of measure is. Of course, it depends. This is the perpetual problem in software. This is a problem that Mark and I have identified a lot: teams get too aggressive about, "Okay, we're doing microservices. Well, the word micro is in there, so that means we should go really micro." Then you end up with this accordion effect where you build a whole bunch of services that are way too small. Then it's really slow, the coordination is a headache, and transactions are hard, so you bundle them back together, but then you face other problems because of exactly what Mark was talking about.
Now you've got this wad of things that are bundled together in an ad hoc way. What we really need is an approach that helps you arrive at the best level of granularity for your services. This is the thing that we really focused on in the first part of our Architecture: The Hard Parts book, which is shockingly two books ago now, since we just finished the Head First Software Architecture book. It is one of the most difficult parts of software architecture because I don't know of any architect who's so clever that they can just look at a complex problem and have the exact right granularity of services just fall out of their head into a drawing tool or into containers or something like that.
What you need is a way to create a candidate design and then iterate on that design before taking on the expensive and time-consuming task of actually implementing it and realizing you've gone too small or too big. That's something Mark and I talk about a lot: this idea of iterative design in architecture. Architecture's not fixed. The essence of agility in architecture is really the ability to iterate and get fast feedback. We're talking about iterating at the design stage, really at the lines-and-boxes phase of architecture. Even at that level, you can iterate, and so what we want to talk a little bit about today is some tools we've identified that you can use for iteration.
That's what you need to be able to iterate: tools that let you do this iteration process. Actually, leading up to the Architecture: The Hard Parts book, Mark had identified these fantastic forces, which I'll let him describe, that are great general-purpose tools for iterating on your architectural granularity.
Ken: Before I ask that, though, if you don't mind: looking at that book, I noticed that you two were joined by two other authors who are better known for their data expertise. What are the levers that you're thinking about here? We'll get into Mark's thing, of course, but are you thinking about just the architecture itself? Are you thinking about team scale? What are the levers involved here?
Neal: That's a great question, and you are astute to observe that. In fact, Pramod and Zhamak are part of our book because, as you'll see as we go through these tools, data plays a big part in this. This is exactly why: modularity is really mostly about architecture and how those things fit together, but service granularity is subject to additional forces, like team topologies, like data. In fact, this is parenthetical to this discussion, but it's a good time to bring it up. Mark and I are currently in the process of doing a second edition of our Fundamentals of Software Architecture book.
One of the things that happens in that book, in the second part, is a catalog of all these architecture styles, like layered architecture and microkernel and microservices, et cetera. We were inconsistent in the first edition, and we're making it consistent in this edition: every single architecture style has a section about data impacts for that style and team topology impacts for that style. Those are exactly the two things that you asked about, which just convinces us even more that those are the important forces in the air right now. That's a great distinction between architectural modularity versus granularity, because granularity needs to incorporate things like data dependencies, as we'll see as we go through our tools.
Ken: Mark, what were those early discoveries there?
Mark: What I want to do is take a little bit of a step back, because Neal said a couple of things that really predate where these drivers and tools came from. Back in 2016, which seems like a long time ago, I coined an anti-pattern called the grains of sand anti-pattern. I coined that based on my personal experience working in microservices and seeing team after team after team start to take every function in a system and make it a service or make it a Lambda or a serverless function or a cloud function. As a matter of fact, there are a lot of lessons learned based on that grains of sand anti-pattern.
Case in point: late in 2023, Amazon realized this same kind of problem with granularity in their video monitoring software. Of course, all the headlines read, "Amazon Going Back to the Monolith. They're abandoning microservices. Yay for the monolith." Well, the reality of it was they did not go back to a monolith. They basically readjusted their granularity. In a lot of companies I work with, I see this rubber banding occur, where developers fall into this grains of sand anti-pattern, realize it's a big mess, and start putting things back together, then realize, "Wait, this is too big a unit to test," or deployment is too risky, so let's break it apart again.
It's really hard to get the right level of granularity. Hence, to your question, Ken, [laughter] I just figured it'd be good to talk a little bit about that history. Basically, what Neal and I identified were the forces that are in play when we talk about how big a service should be. Now, back in the day, when microservices were first introduced and companies were kicking the tires, another architect, Sam Newman, wrote the book Building Microservices. In there, he offered some really sound advice. He said this: "Start out more coarse-grained and move to more fine-grained as you learn more about the service."
What wonderful advice, except, unfortunately, Sam stopped right there and really didn't tell us what he meant by learning more about the service. That's what we're going to talk about in this podcast. The first of the two kinds of drivers we found were granularity disintegration drivers. These are forces at play, different aspects and forces that would cause us to break a service apart, making it more fine-grained, more single purpose. However, those forces have to be balanced with the trade-offs of granularity integrators. These are forces that say, "Well, not so fast. Maybe you should make this service bigger or not break it apart."
Getting the right level of granularity is really about understanding the trade-offs and the balance between each of these granularity disintegration drivers, which we're going to talk about, and the granularity integration drivers. That's really the framework, or the tools, that we're offering. As Neal is very fond of saying, neither of us can tell you how big your service should be. What we can do is give you the tools in a toolbox so that you can figure out how big that service should be.
Ken: These tools, are they pretty clear in their outcome? If I do this and I apply this measurement, is everyone in the room going to agree or is there still a lot of, "Eh"?
Neal: Nobody ever agrees on anything in software architecture, but we also make this distinction between fairly generic tools, which are the ones we talk about, but also hyper-specific tools that will apply to a particular organization or problem domain or team topology. Let's give you a few examples of these to give you a flavor of what we're talking about and you'll see how agreeable they may be to the crowd of architects in the room. Rumor has it that a group of architects is referred to as an argument of architects, but I've always thought it should be called a cohesion of architects, but anyway.
The first of the disintegrators that we talk about is the most obvious one which is service functionality. That's what leads you to, "Oh, I've got this big thing. Let's break it down into the smallest pieces," and that's where the micro in microservices comes from. You may take all the individual pieces you can think of and break those down into small pieces and that's largely a refactoring exercise as long as you worry about the data dependencies that may exist there.
Mark: Now, it's interesting, Neal, that Ken asked the question about whether you can agree upon them. If I talk about one of these drivers, are these definitive, or are we still going to have an argument of architects on those? It's interesting, Ken, because it turns out the disintegration driver that Neal's really talking about, service functionality, which has to do with cohesion, is in fact subjective, because we have to ask: what really is single purpose? Oh, Neal, let's talk about our famous example.
We have three ways in which we can notify a customer in our system. We can notify them through SMS texting, we can notify them through email, or even send them postal letters through the mail. Assume we have a notification service. Well, what's single purpose? The act of notifying a customer about something or is single purpose the act of notifying a customer through email?
Neal: You could go even finer. Is it every kind of message for each kind of communication medium? I think you could argue robustly over drinks for any one of those, philosophically, as being single purpose. That's why we've decided you've got to get away from philosophy and you need tools. That's where the seed of disagreement often lies.
Mark: Neal, I think it would be fun to alternate and talk about the different ones because, Ken, coming back to your important question, that is the only disintegration driver that has this kind of variability. In other words, if I say, "No, that notification should be just a single service. It doesn't matter whether it's email, SMS texting, or postal letter, it should be in a single service," you might say, Ken, "No, that needs to be three services." Neal says, "No, that should be two: electronic notification and letter."
We could fight all day, and we will rarely, if ever, be in agreement about this. The other disintegration driver that we have is code volatility; in other words, it's linked to the concept of volatility-based decomposition. When we've got a single service and we're wondering whether to break it apart, one of the things we can actually learn about the service and measure is the rate of change of different parts of the source code or functionality in that service. We can measure this over time with very simple Git commands.
If we find that one part of the service is changing a lot and others aren't, we can extract that functionality out of that larger service into a smaller one, thereby breaking it apart, reducing the testing scope, reducing the deployment scope, reducing deployment risk. Now that it's extracted, that one piece doesn't impact any of the other parts of that service when it's deployed or even changed. That is one that we can actually measure. It's really hard to argue with a demonstration that this part of the service changes all the time. [chuckles] That's another good disintegration driver.
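A minimal sketch of what those "very simple Git commands" might look like, wrapped in Python; the repository path and the six-month window are placeholders, not anything prescribed in the episode:

```python
# Sketch: count how often each file has changed recently, using plain
# Git history. High-churn files that cluster in one part of a service
# are candidates for extraction into a smaller service.
import subprocess
from collections import Counter

def change_counts(repo_path: str, since: str = "6 months ago") -> Counter:
    """Return a Counter mapping each file to the number of commits touching it."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}",
         "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in log.splitlines() if line.strip())

if __name__ == "__main__":
    for path, count in change_counts(".").most_common(10):
        print(f"{count:5d}  {path}")
```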
Neal: There's an additional add-on to that one that I usually bring up at this point, which is that volatility is also a great indicator of-- Mark and I talk a lot about trade-offs as being the essence of software architecture. Our first law of software architecture, in our first book, is that everything's a trade-off. We often see people who miss the trade-offs of things that they're trying to do. Code reuse is a great example of one of those, because there are really two things you need to consider with code reuse. The first one everyone gets right, and everybody misses the second one.
The first one is abstraction. "Oh, look, I can abstract this code from multiple sources and reuse it because it's a useful abstraction." The second part is low volatility, because if you reuse something that has high volatility, it drives churn in your architecture: every time that thing changes, everything that's reusing it has to stop and coordinate around that change. Even if they don't have to make a change themselves, they have to check and test to see if they're affected. Highly volatile things make terrible reuse candidates in software architecture.
The real essence of effective reuse is both good abstraction and low volatility. That's a good guideline for the things you should reuse. In fact, this goes back to reifying the common advice that you hear. What are the things that we reuse most effectively in software architecture? It's mostly plumbing: frameworks, libraries, things that have formal release cycles that we can track. It's also the reason why domain concepts make terrible reuse candidates: domain concepts are, by definition, the most volatile things. The domain is the thing we're writing the software about, and it's going to change the most rapidly.
If you do want to reuse part of your domain, then volatility-based decomposition is a great way to think about it, because it lessens the damaging impact on the other parts of your architecture of coupling to something that's too volatile. That insight into volatility is just a side benefit of this particular disintegrator. The third of our disintegrators we called scalability and throughput, but it's really that whole family of operational architecture characteristics. Very often, this is the thing that distinguishes where one part of my system needs to be more scalable or more elastic than another part of my system.
That's a clear place where, going back to the example Mark gave before of notifications: SMS texts may run 200,000 a minute, whereas postal letters go out once a day. Clearly those have very different scalability needs, so you disintegrate them around scalability or some other operational characteristic. Those are particularly important because that's really the benefit of microservices: the ability for each service to have its own set of characteristics around those operational capabilities within the architecture.
Ken: For lack of a better term, is there a minimum viable product for that? I've seen a system where they had that: they had a part they had to scale, so they broke it out, deployed it in multiple zones around the world, and did a whole bunch of other things to it so it could scale. The communication ended up causing them no end of nightmares. If they're urged by that to disintegrate, how far should they go?
Neal: Ah, just stay tuned. One of our integrators is coming right up, which is the side effect of over-applying a disintegrator. You apply an integrator to solve exactly the problem that you were talking about, sir.
Ken: Okay. We'll stay tuned for that one.
Mark: Yes. As a matter of fact, very astute of you, Ken, for observing that, because that's where a lot of the rubber banding does in fact occur. However, if we apply this (I hate the word framework), these kinds of techniques of disintegrators and integrators, we can now analyze those trade-offs. So yes, as Neal said, stay tuned, but Neal's right. Operational characteristics are a great way to, first of all, measure and learn about a particular service. Again, it's something I can show metrics for, a graph, which is really hard to argue about. The next one is fault tolerance.
In other words, you know that frustrating situation where you've got a service and one part of it just keeps failing? Of course, we know in the monolithic world, if that one part, let's say, spins up too many threads or gets an out-of-memory fault, well, all that functionality comes down. As a matter of fact, fault tolerance is one of the drivers of moving to a level of modularity and a distributed architecture. Well, we can apply that same concept at a service level as well. If we have a particular service that has a faulty piece to it, we can measure that. We can identify the failure rates of a certain piece of functionality that brings down all the others.
Ah, let's go back to that notification service example. Let's say that email continually has some out-of-memory faults on its SMTP connection pooling, and for some reason, it won't release the connections, keeps getting more of them, and ends up crashing. If we have a single service, that's going to bring down SMS texting and also postal letter processing. This also has an element to it of mean time to recovery, MTTR. Sure, we could just bring up another instance of the service, but if it has all this functionality, it might take two to three seconds to spin that back up.
Whereas, if we remove that faulty piece from the service, extract it, break that service apart to isolate the not-so-reliable functions inside it, now we've isolated those errors, such as email, for example. When it comes down, it's not impacting SMS texting or even postal letters. As a matter of fact, it's a lot faster to bring up a new instance, improving the mean time to recovery. Fault tolerance is yet another operational characteristic that we can learn about and measure. Ken, that's really hard to argue about.
[Laughter]
Neal: If podcasts are about nothing else, they're about digression, so I feel compelled to digress here for just a moment. One of the common problems in architectures like this is exactly what Mark is talking about, which is fault tolerance. In fact, the entire family that has come to be known as the eight fallacies of distributed computing. Peter Deutsch coined these back in the, I think, late '80s at Sun Microsystems. All of these common fallacies that developers stumble into as they start building distributed systems like bandwidth is infinite and the transport cost is zero and all those other pesky sort of real-world problems.
We have always recommended that you learn about the eight fallacies in one of two ways: either go to the Wikipedia page, learn about them, and apply them, or painfully stumble across them one at a time in your professional career. I think the former is a better way to learn them, but you'll learn them either way. [chuckling] Those of us who chose the latter tend to advise people to try the former. This is all just prelude to the fact that Mark and I have slowly been adding to that list as we've observed things in the microservices world.
We've added three so far that we've semi-published. The first one of those is that versioning is easy because it's not. It's a swamp because, "Oh, we'll just version that." "Okay, well, how many versions are you going to support? How long are you going to keep those versions? What is your deprecation strategy?" and all these questions that come up. That's the first of our new ones. What's the second one, Mark?
Mark: The second one is one of my favorites: compensating updates always work. When we talk about transactions, especially atomic transactions in distributed architectures, well, if we have an error along the chain of services being invoked, I've already committed data in prior services, in prior databases. The problem is that all I can do is issue a compensating update. A fallacy is something we believe to be true, but it's not. We always assume those compensating updates, which reverse a particular insert or update, will work. What if one doesn't? I love that one.
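A minimal sketch of that failure mode; the service objects and the dead-letter queue here are hypothetical placeholders, not APIs from the book or any real library:

```python
# Fallacy in action: the compensating update itself can fail.
def register_customer(profile_svc, password_svc, dead_letters, customer):
    profile_id = profile_svc.create(customer)        # committed in service A
    try:
        password_svc.create_credentials(profile_id)  # suppose this fails
    except Exception:
        try:
            profile_svc.delete(profile_id)           # the compensating update
        except Exception as compensation_error:
            # The path nobody designs for: the compensation failed too.
            # Park it somewhere durable for asynchronous or manual repair.
            dead_letters.enqueue({
                "action": "delete_profile",
                "profile_id": profile_id,
                "error": str(compensation_error),
            })
        raise  # surface the original failure either way
```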
Neal: Just like you never see anyone on a cooking show burn a dish, when people are describing some sort of architecture pattern, they never talk about the fact that, "Oh, and the compensating update may fail, too." Then what do you do? That's a whole other can of worms. The last one of the ones we've added is that observability is optional because it's really not in modern-- because it's going to break, so you've got to know how it broke. That's a little sidebar, too. The last of our disintegrators is another pretty obvious one, which is security, or security concerns, or privacy concerns.
Obviously, that's a motivation for breaking something into more granular pieces just to be able to protect something like that, either financial or some sort of privacy-related thing. That's a pretty obvious one.
Mark: I have to interject. That doesn't mean the rest of you get to not worry about security! [Laughter.]
Ken: That's right.
Neal: That's exactly right. I want to re-stress something that I mentioned earlier before we go to the integrators: this is not meant to be an exhaustive list. These are just ones that we identified. We encourage architects to fill up their toolbox with organization-specific ones and start thinking about them and labeling them in this way. As Mark said, these are not really patterns as much as heuristics for design, but it's useful to name these so that you have them at your disposal when you start thinking about things. Let's talk about the three integrators, which are the forces that encourage you to bundle things back together.
Mark: I want to digress just a little bit, since you're right, Neal, podcasts are a way of being able to digress, and talk about that fallacy 11 that we coined: observability is optional. What we saw with four of these disintegration drivers were various measurements that we can use to learn more about a service. The more I reflect on that, Neal, the more I realize the importance of that fallacy, because in too many cases I have been in, observability is an afterthought. Once everything's running and we're in production, it's like, "Oh, you know, we should start monitoring this stuff."
Well, observability will cause changes in the service, the user interface, maybe a particular system or product. Observability really is about the ability of a particular service to expose or export its information, its telemetry. It doesn't always have to be about response time. It could be error rates. It could be the number of messages I just processed per hour. It could be a lot of functional-related information. I think that's an important one not to just gloss over, because as an afterthought, we're now retrofitting something that is critical in the architecture, especially distributed architectures these days.
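A minimal sketch of a service exporting its own telemetry; the prometheus_client library, the metric names, and the send function are assumed choices rather than anything prescribed in the episode:

```python
# Export functional telemetry (counts and error rates, not just
# response times) from a notification service.
from prometheus_client import Counter, start_http_server

SENT = Counter("notifications_sent_total",
               "Notifications successfully processed", ["channel"])
FAILED = Counter("notification_errors_total",
                 "Failed notification attempts", ["channel"])

def send_email(message) -> None:
    try:
        ...  # the actual SMTP send would happen here
        SENT.labels(channel="email").inc()
    except Exception:
        FAILED.labels(channel="email").inc()
        raise

if __name__ == "__main__":
    start_http_server(9100)  # exposes /metrics for a monitor to scrape
```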
Anyways, just wanted to digress on that. Speaking of integrators: what we just talked about are some of the many ways we can learn about a service and say, "Yes, we should probably break that apart. We should make these services smaller." But the first integration driver happens to be transactions, because when we break apart a particular service, if we are breaking apart that data as well and we still need a single unit of work, we are now in a distributed transaction world. Think about the case of registering a customer. We have to create a profile for the customer, and we choose to separate the customer profile creation service, which registers the customer, from a password service.
Maybe it's due to Neal's prior example of a disintegration driver on access restriction for security. We don't want everybody getting to a password service, so this is probably a good driver to break that apart. However, now I have a distributed transaction. I can't do a simple commit or rollback upon errors. What we see is now our first trade-off because that trade-off is data integrity and data consistency. Any of the disintegration drivers we just saw, the trade-off of those is potentially going to be data integrity, data consistency, issues with fallacy number 10, which is compensating updates always work.
Now all of a sudden we've got inconsistent data everywhere. The first integration driver is this: if a transaction, and when I say transaction I mean a database transaction, a single unit of work with commit and rollback, is required by the business, we necessarily have to join those services back together. That's the only way to get that single-unit-of-work transaction.
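A minimal sketch of the single unit of work at stake; sqlite3 stands in for any relational database, and the profile/password schema is hypothetical:

```python
# One local transaction: both inserts commit together or both roll
# back. This is exactly what is lost once profile and password become
# separate services with separate databases.
import sqlite3

def register_customer(conn: sqlite3.Connection, name: str, pw_hash: str) -> None:
    with conn:  # commit on success, rollback on any exception
        cur = conn.execute("INSERT INTO profile (name) VALUES (?)", (name,))
        conn.execute("INSERT INTO password (profile_id, hash) VALUES (?, ?)",
                     (cur.lastrowid, pw_hash))
```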
Neal: That's a great example of the iterative nature of design I was talking about. For example, with the service functionality disintegrator, you break things down by behavior, but then you realize, "Oh, no. I'm going to have to do a transaction to get data integrity." That encourages you to integrate them back together and maybe apply a different disintegrator to see, "Well, can I slice it in a different way?" That's the iterative nature of this kind of design. The second one of our integrators is something that Ken brought up earlier, at the very beginning of this discussion, which is data dependencies.
In our Hard Parts book, we broadly talk about the static coupling in software architecture, which is how things are wired together, but then also the communication in those architectures, including things like transactions. Mark was just talking about the communication side. Now we're talking about the static wiring side, which includes things like data dependencies. Many teams have gone on a wild west adventure of breaking apart their services only to realize that there's no way the data is going to allow them to do that because of joins and views and stored procedures and referential integrity and all those kinds of things.
That is actually very often an integrator: "I just cannot break apart this data so much." In fact, an entire fascinating subtopic in the microservices world is the really strong distinction between who owns this data and who is transactional with this data, versus who can see this data and how they see it. Hence a lot of caching, in-memory caching, data sidecars, and distributed data. GraphQL is a commonly used tool for aggregating data and putting it in places so that you have visibility but not updatability for data. Microservices has really brought that distinction into high relief because it's such an important distinction in so many of those architectures. Data dependencies are a very common integrator.
Ken: What's the organizational action there? If you're coming up with something and there's all these great reasons to disintegrate it, but the data doesn't support it and so forth, is there an organizational action? Should you be looking at that? Should you be changing the data? Is it a chicken and egg problem? What's next? How do you get across the street?
Neal: It's a great question. This is the essence of software architecture because now we've dug into the problem enough where we can start doing actual trade-off analysis. What is the trade-off of actually digging into that 10-year-old database schema and starting to pull it apart? Where will we be at the end of that process? How much value is this going to add versus proactively designing our architecture around the single database and not getting some of the benefits of microservices? One of the common hybrids that Mark and I talk about in our fundamentals book is something that we call a service-based architecture, which is as far toward microservices as we can get, but we're keeping a single database because it's just too complicated to break the thing apart.
That's really common, particularly when that shared database is a resource or an asset for other integration points and other systems who are not interested in your level of granularity in your microservices world, or you meet database developers who are also not interested in your crazy architecture philosophies and have been using their same database philosophy for a few decades, thank you very much. Don't need any extra insight from you about how things should be organized. All of those things are institutional factors, but the essence of that is you need to get it down to a point where you can do some sort of objective trade-off analysis.
And generally, it's not even just you: you need to put together your pros and cons and the weights and values of those, the data people need to do the same, and then all of that gets boiled up to an actual decision maker of some kind.
Mark: Neal, I think that last part you said is really the key to this. It's actually a rather important question that you asked, Ken, because what we're offering here are trade-offs. We're showing drivers to say, "Oh, I can get better scalability if I break these services apart," but at what cost? What we're describing with those integrators is that cost: data integrity, data consistency, having to tease apart and break apart data. Maybe it's connected by other artifacts in a relational model, such as stored procedures or views or triggers or foreign keys or any of these things.
The point is, and this is what Neal subtly made at the end: a lot of times the role of an architect is to measure and analyze these trade-offs but not necessarily to make the decision, rather to bring these trade-offs to a product partner, somebody on the product team, who can understand these particular forces and decide which is more important: that we are able to scale better or that we have better data consistency. In a lot of cases, we as architects just simply assume and make that decision when in fact we should be collaborating with the product team to make it.
Anyways, I just wanted to emphasize that point, Neal. Ken, would you like to guess what the last integration driver is? Because you've already said it. [chuckles]
Ken: Team topologies?
Mark: No, it was what you first talked about when we were talking about breaking apart a service.
Ken: Oh, deployment.
Mark: Yes, you said, "Wait a minute, but if you break these things apart, they're all going to have to communicate with each other." That is the third one. What Neal talked about in our second driver was really the fact that it's too hard to break apart the data because it's too highly semantically coupled and maybe even highly syntactically coupled. That third driver is, Ken, what you were referring to, which is highly semantically coupled functionality. As we apply these disintegration drivers to start breaking apart a service, and take one service and make it five, well, if any given request needs to stitch all five of those together, we're really not gaining anything.
As a matter of fact, we're losing things. We're going to have poor performance due to latency. We're also going to have less reliability, because if an error occurs along that chain, I've already committed two pieces of data. What do I tell the user? If I'm retrieving data, well, try it again, see if it works this time. There are data consistency errors along that chain. This last one we're talking about is really an important lesson in understanding the nature of the processing of those functions. When we break a service apart, if it's three independent pieces that have three different APIs, great. As long as they don't need to be called together, we're fine. If they do, boy, this is a huge trade-off.
Neal: It shows how easy it is to accidentally create the worst of all worlds, because you're not getting any of the benefits of microservices and you still have all the disadvantages of the monolith. You've backed yourself into a bad situation. That's a good example of something that Mark and I preach a lot, this idea of iterative design. Some people say that agile architecture means, oh, there's no architecture; you just start iterating on design and you end up with something. You can sort of emerge a design. This is a really important lesson that Rebecca Parsons taught me years ago: emergent design, yes, that's okay for the domain-driven design or design community idea of don't start with a lorry; you start with a roller skate, then you build a unicycle, then a bicycle, then a motorcycle, then a car, then a truck, and then a lorry.
That doesn't work in architecture, because so much of architecture is about capabilities. The capabilities of a roller skate in terms of mass and weight bearing, et cetera, are very different from those of a lorry. It does require some planning, but it's not the lack of planning that makes it agile. It's the ability to get fast feedback and iterate that makes it agile. That's what Mark and I have been leaning heavily into over these last few years: building these tools for doing iterative design in architecture, because that, I think, gets you to a better design faster than any other way of attacking it.
Ken: I don't know if I should add this because it might be a little too much info about me, but 20 years ago when I was doing agile training, I ripped that page out of the book before I handed it out because there is nothing reusable about a bicycle that goes into a car. That is not a step towards building a car.
Mark: You know what, Neal? I love your analogy of the roller skate going to the lorry, but something just dawned on me when Ken said at the very start of this podcast, "What's the size of my house?" I thought just because you can build a dog shed in your backyard does not mean you can build a skyscraper or an office building or an apartment building or even a house. The techniques, the capabilities are worlds apart. I think both of these are really good examples of the difference between that emergent design and iterative architecture.
Ken: Yes, you're so right. Even the dog shed example, we were given one for our dog and they didn't ask. We have a Great Dane, so even the dog shed didn't fit. Yes. I want to thank Neal and Mark very much. Again, which of your many books, for this particular topic, would be the most useful to our listeners?
Neal: This is all Software Architecture: The Hard Parts. All the things that were too hard to put in the fundamentals book, we wrote them in the hard parts. This is basically the first half of the Hard Parts book.
Ken: Then what's the upcoming book?
Neal: The one that we just released is Head First Software Architecture, which is like doing a graphic novel about software architecture. We're actually working on the second edition of the fundamentals book, which is a great relief because we're getting to write prose and not having to draw pictures and diagrams. Head First is very visual heavy. It's actually a nice change of pace to get back to just pure prose because that's my first love as a writer, as it turns out.
Ken: Again, thank you very much for the time. We certainly appreciate it. Have a great day.
Neal: Thanks, Ken.
Ken: Okay, well, thank you.