Brief summary
Serverless received significant attention when it first emerged in the middle of the 2010s. And although it has now entered the mainstream and is today used in a diverse range of scenarios and architectures, it nevertheless remains a topic that causes considerable confusion and debate: where should we use it? How should we use it? Sometimes, what even is it, exactly?
In this episode of the Technology Podcast, Mike Mason and Prem Chandrasekaran are joined by former Thoughtworker Mike Roberts — author of "the canonical book on serverless," Programming AWS Lambda — to discuss the current state of serverless. They examine the ways that serverless is understood today and explore the impact it has and the challenges it poses for both businesses and software developers.
- Read Mike Roberts' book
- Read Mike's long-read on serverless on
Episode transcript
[Music]
Mike Mason: Hello, and welcome to the ºÚÁÏÃÅ Technology Podcast. My name is Mike Mason; I'm one of your regular hosts on the podcast. I'm joined today by my friend, erstwhile Thoughtworker, Mike Roberts. Mike, hello.
Mike Roberts: Hello! Hello, everybody. Hello, Mike.
Mike Mason: We're also joined by Prem C, who's one of the podcast hosts. He's quiet right now because he's having some network problems, but he'll be joining us to ask questions as well. Mike used to work for ºÚÁÏÃÅ, and he's gone on to do great things in the world. Today, we're going to be talking about serverless. Mike wrote a book eight years ago now, I think that's about it.
Mike Roberts: Seven.
Mike Mason: Seven, give or take. All right. Mike wrote — I'm going to call it the canonical book on serverless — Programming AWS Lambda. It has a Java-based focus, but I think it's a pretty comprehensive book. Since then, he's also started a serverless-focused consulting firm and does a lot of consulting for clients using AWS. Is that a fair description of who you are and what you're up to, Mike?
Mike Roberts: Yes, pretty much. The consulting thing has been going on for about seven years now. It's been part and parcel of that. It's been an interesting history with what we've been working on there.
Mike Mason: Today's podcast, we wanted to talk about the state of serverless. Serverless isn't new anymore. It's something that we've talked about previously on the podcast. Having said that, I do think there is still a little bit of confusion around what serverless actually is. Maybe just really briefly, Mike, could you tell us what you mean when you think of serverless, or whether there's particular categories of it that we should talk about?
Mike Roberts: Yes, absolutely. One of the things to say straight off, for any listeners out there that might be confused about what serverless is: don't worry. Those of us in the industry are confused about what serverless is half the time anyway, because, like all of these buzz phrases that we have in our industry — DevOps, Agile — you know serverless is getting somewhere when it starts meaning other things, so that's fun! The basic idea of serverless has been around now for about eight years. It's a collection of cloud-based services that mean you don't have to worry about running server hosts, or running server processes, or things like, "well, I need to worry about scale" and how many things I can run before it gets too expensive.
The whole idea of serverless is that it really feels like you're using software-as-a-service — like you're using Google apps or whatever — but you're using that concept within your application architecture. There are a few ways that breaks down, but that's the idea. The most popular part, and the reason serverless became a big deal, is this idea called functions-as-a-service, or "cloud functions." This is where you can run your own server-side software — that can be anything, it can be running any kind of task — but you are not worrying about things like Kubernetes clusters or virtual machines or anything.
You're just putting your code up there into a service and letting the service provider figure out when it needs to run, how it needs to run it, and how much it needs to scale — all that kind of thing. The biggest provider of functions-as-a-service is AWS Lambda. When people think about serverless, they're often thinking of AWS Lambda, but it can mean more than that, and certainly, these days, it's starting to become a lot more than that.
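To make that concrete, here's a minimal sketch of a function-as-a-service unit, using the standard aws-lambda-java-core interface. The class name and event shape are our own illustration, not something from the episode:

```java
// A minimal sketch of the functions-as-a-service model described above.
// The platform invokes handleRequest whenever an event arrives; there is
// no server process, port, or host for us to manage.
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;

public class HelloHandler implements RequestHandler<Map<String, String>, String> {
    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        // The provider decides when, where, and how many copies of this
        // run; our job is only the body of the function.
        return "Hello, " + event.getOrDefault("name", "world");
    }
}
```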
Mike Mason: Let's just go into that for a minute. What kinds of things more than that? Storage, is that a form of serverless, or is that not?
Mike Roberts: Yes. Serverless storage has actually been around since the beginning, in a couple of ways. We've had these managed databases-as-a-service for a long time now, and they used to be wrapped up a lot in application frameworks — things like Google Firebase and Parse were around six, seven years ago. But even things like DynamoDB and S3 from Amazon exhibit all of the traits that you like about serverless: you don't have to worry about running a DynamoDB instance, every developer in your organization can have their own DynamoDB table, and it might not cost very much money. Those are the classic serverless traits.
We're seeing a couple of new things, or a couple of things that are really coming to the fore over the last couple of years. One is that more specialized application hosting is becoming more of a thing. People don't think about this as serverless as much, but things like hosting your Next.js application on Vercel, or hosting a website on Netlify, where you have got an amount of custom behavior going on that's provided by the managed application provider — that is serverless to all intents and purposes. You're not running a Vercel host; you're giving Vercel your code and letting them figure out how to host that for you.
It's more than just a web server — there's more going on there, which is why I think about that as being serverless. That all gets wrapped up in a style where a certain sort of people that have been doing serverless a while try not to write much custom server-side code. They really rely on a lot of independent managed services to provide the kind of things that we might otherwise have used a library for — using a third-party authentication system instead of your own database and library, that kind of thing. Plugging a whole bunch of those together, normally integrated through a front-end application — that's been one of the cool kids' ways of using serverless for a while. There's a guy that talks and writes a lot about that called Joe Emison, who is the CTO of an insurance company. He's actually got a book coming out on that way of developing serverless applications.
But the other side of where serverless is today is this buzzy breaking-down of what serverless really means — and some of that is the enterprise-ification of serverless, if you will: things that are serverless-ish. A very good example of this is that Amazon now have a bunch of services which they call serverless, and those of us that have been around a while go, "What? What?"
We would think about them as just being auto-scaling services. For example, they've brought out a version of their Elasticsearch offering, which is called OpenSearch, and they now have an OpenSearch Serverless. They have a serverless version of their graph database, Neptune. The thing about some of these services coming from Amazon is that they cost quite a lot of money even when you're not doing much. Part of the point of serverless is that you should be able to have a thousand of these things that are doing very little and don't cost very much.
If you're not using a Lambda function, Amazon don't charge you, but we're starting to see a lot of these things that are really just very good auto-scaling with a floor that's not zero. We're starting to see some of these things get picked up under the serverless umbrella.
Mike Mason: Serverless is being used as a bit of a branding exercise on some of those in the same way that we see semantic diffusion around anything that's popular like Agile, DevOps, microservices…
Mike Roberts: Exactly.
Mike Mason: …People, including Amazon, are slapping the label serverless on stuff that is just, in your opinion, software-as-a-service with some good scaling characteristics.
Mike Roberts: Yes. Then again, some of these services are very good. They have Serverless Aurora, which is their serverless SQL database offering. A new version of Serverless Aurora came out, and it was missing a few things that some of us were sad about, but for certain clients, the auto-scaling capabilities of Serverless Aurora are fantastic. If you're running a production application that's always getting some traffic but has quite a variable workload, then something like Serverless Aurora can really help you from an auto-scaling point of view. But is it serverless? Eh.
Mike Mason: That's actually kind of how you know that something has crossed the chasm into the mainstream.
Mike Roberts: Exactly.
Mike Mason: One of the questions that we've asked previously on podcasts is, is serverless suitable for my application, or my company, or my situation? Are we getting to the point where it's not a question of whether it's appropriate, it's more like how are you going to incorporate serverless architecture into your systems?
Mike Roberts: Yes, absolutely. I've been running this little consulting business now for seven years. One of the things that we've always tried to do is be pretty pragmatic about our use of services. What we care about is typically building an AWS-native architecture. By that, what I mean is we want to use the most appropriate AWS services that we can. Oftentimes, with the kind of work we do, those are serverless services, but if there's a thing where it makes much more sense just to run something in a container, we'll run something in a container.
It's always been a little bit like that. The other thing — and this is almost a third aspect to where serverless has changed, but it feeds right into what you just said, Mike — is, again, the enterprise-ification of serverless, and actually in a good way: we're really seeing that Amazon have added a lot to Lambda that's about trying to reach people where they are in their workloads. Now, some of the serverless purists aren't necessarily so happy about this, but, for example, there was a big feature that came out at Amazon re:Invent in November, which is Amazon's big conference they run every year.
The feature is called SnapStart, and it got a huge splash at launch. It's a feature for Java on Lambda that reduces what are called the cold start times in Lambda. A lot of people have said, "Hey, I don't really want to run my Java apps on Lambda because it can take 5 to 10 seconds for them to start up, and that's not great for an API." Amazon now say, "Actually, we can solve that for you." I actually just wrote a two-part blog series exploring that; it came out this week. And sure, some people say, "We're having to worry about that kind of thing.
In the good old days of serverless functions-as-a-service, we wouldn't have to worry about that kind of stuff." On the other hand, people have been saying, "Now we can run workloads that we weren't able to run before on this technology." I think that's a good thing.
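For readers who want to picture it: SnapStart works by snapshotting an initialized execution environment and restoring it on demand, and Java code can cooperate with that lifecycle through the org.crac runtime hooks. A rough sketch, in which ExpensiveClient and its methods are hypothetical stand-ins for anything costly to initialize:

```java
// A sketch of cooperating with Lambda SnapStart via org.crac hooks:
// expensive initialization happens before the snapshot is taken, and
// anything snapshot-unsafe (live connections, cached randomness) is
// re-established after each restore.
import org.crac.Context;
import org.crac.Core;
import org.crac.Resource;

public class PrimedHandler implements Resource {

    // Hypothetical stand-in for expensive setup: warmed code paths,
    // loaded config, a connection pool...
    static class ExpensiveClient {
        void closeConnections() { /* release sockets, etc. */ }
        void reconnect()        { /* re-establish state */ }
    }

    private final ExpensiveClient client = new ExpensiveClient(); // runs at init, captured in the snapshot

    public PrimedHandler() {
        Core.getGlobalContext().register(this); // opt in to checkpoint/restore callbacks
    }

    @Override
    public void beforeCheckpoint(Context<? extends Resource> context) throws Exception {
        client.closeConnections(); // don't freeze live connections into the snapshot
    }

    @Override
    public void afterRestore(Context<? extends Resource> context) throws Exception {
        client.reconnect(); // runs each time a snapshot is thawed to serve requests
    }
}
```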
Mike Mason: Is the implication that the more purist serverless would have been lighter-weight functions-as-a-service, and this new capability to run Java stuff means you can potentially do bad things like bringing your crusty old enterprise app and trying to run that in a "serverless" way?
Mike Roberts: Yes, exactly. There's another feature that literally just got announced this week for Lambda. One of the things about Lambda is that Amazon manage the runtimes for you. They manage both the operating system and the runtime, and that's always been the case. A really good example of why that's great is when you get big security problems, where Amazon basically just update everyone's Lambdas overnight, and no one cares because they don't have to worry about it. There have been a couple of huge security events where everyone was running around with their hair on fire, apart from those of us using Lambda — we didn't have any work to do.
That's great for most of us, but some companies that have a lot of really deep library usage, or are using particular parts of the operating system, and don't move very fast, are now at the point of saying, "We can't have you changing the runtime underneath us, because that could break our code." So Amazon just added a feature this week where you can, if you want, lock down the version of the runtime that you're using, which includes the operating system version. Basically, you're saying to Amazon, "Hey, don't update this until I'm ready," which is really great for these big enterprise customers.
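As a sketch of what pinning a runtime might look like via the runtime management controls Mike describes, here using the AWS SDK for Java v2. The function name and version ARN are placeholders, and the builder names reflect our reading of the generated SDK rather than anything from the episode:

```java
// A sketch of pinning a function's runtime version so Amazon won't
// update the runtime (or its OS) underneath you until you choose to.
import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.PutRuntimeManagementConfigRequest;
import software.amazon.awssdk.services.lambda.model.UpdateRuntimeOn;

public class PinRuntime {
    public static void main(String[] args) {
        try (LambdaClient lambda = LambdaClient.create()) {
            lambda.putRuntimeManagementConfig(PutRuntimeManagementConfigRequest.builder()
                    .functionName("my-function") // placeholder name
                    // MANUAL: don't roll the runtime forward automatically.
                    .updateRuntimeOn(UpdateRuntimeOn.MANUAL)
                    // Placeholder ARN of the runtime version to stay on.
                    .runtimeVersionArn("arn:aws:lambda:us-east-1::runtime:example")
                    .build());
        }
    }
}
```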
The purists are saying, "This is yet another knob and dial that I'm aware of that I have to think about. Lambda development is already hard enough because it's different enough to how we've been thinking. Amazon, you keep adding on all of these features. You're just making it harder for everyone." The trade-off is that Amazon are making it-- Go ahead, Mike.
Mike Mason: The other thing that's been in the news recently is tech debt as a concept — Southwest Airlines had a bit of an outage, the FAA had a bit of an outage. Both of those are being blamed on tech debt, and it's interesting to see tech debt come up as a concept in the mainstream media. Presumably, if you're using Java Serverless and you lock it into today's version of the framework, you're actually baking in technical debt because you're not doing the upgrades as and when they come out. You're continuing to cause yourself a potential future problem.
Mike Roberts: Yes, and that's been the case for a while, and for other reasons — even if you're not tied to the operating system, you're still tied to your libraries. Mike, as you said, I wrote a book on Lambda and Java, and all our code examples use a bunch of libraries. The number of times I get Dependabot alerts from GitHub because the XML serialization library we use has got yet another security flaw — we've already had plenty of opportunity for tech debt. It's just that, certainly, when Amazon start adding these things — there were constraints on the platform that forced people into a good place.
Those constraints are being loosened, and Amazon are doing it for good reasons. I actually agree with Amazon on what they're doing here. There's certainly a bunch of people saying, "No, those constraints were good, and now you're going to be setting customers up for a bad place."
Prem Chandrasekaran: Mike, I had a question about that. AWS clearly seems to be setting the pace with serverless, but what about the state of serverless with the other cloud providers? Are they even worth looking at, or do you think AWS is so far ahead that that's not really an option?
Mike Roberts: I will preface this with 99% of the work that I do is AWS-focused these days, and so I am not an expert on the other clouds. My gut feel is that Microsoft are moving fairly fast with their functions offering. Google have a functions offering, but Google have other takes on what serverless means that are closer to a container focus world.
One of the things that I think Amazon are clearly advancing on — and this is something that we could get onto more later if we want — is one of the powerful aspects of Lambda especially: the number of other services that it integrates with inside AWS.
The whole point about Lambda, really, is that it's an event-driven system. You can't open a TCP port from a Lambda function and just expose it to the world. You have to have things integrate with it and call it, whether that's an API gateway, or a messaging system, or whatever. One of the things that I do think Amazon are clearly leading on is the number of ways they are integrating into their functions-as-a-service platform. We really saw, at re:Invent in November, that Amazon are trying very hard to push an asynchronous, event-driven philosophy. Lambda is one of the services involved in that, but there's a lot of things that aren't Lambda as well.
They have a thing called Step Functions, which is a workflow orchestrator. They have lots of new aspects to their messaging platform, EventBridge. They're really trying to push serverless event-based systems. Part of me is a little bit worried that this is where we were 20 years ago — when things like TIBCO and Microsoft had their big proprietary orchestration frameworks — but just doing it serverlessly. But because Amazon are approaching it with their serverless mindset, where you don't have to run any of these things — Amazon runs them for you — and because they charge based upon usage, I'm not so worried.
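To give a flavour of that asynchronous style: a sketch of publishing a domain event to EventBridge with the AWS SDK for Java v2, from which rules can fan out to Lambda functions, Step Functions, queues, and so on. The bus, source, and detail-type names are invented for illustration:

```java
// A sketch of the event-driven style: publish a domain event and let
// EventBridge rules decide which functions or workflows react to it.
import software.amazon.awssdk.services.eventbridge.EventBridgeClient;
import software.amazon.awssdk.services.eventbridge.model.PutEventsRequest;
import software.amazon.awssdk.services.eventbridge.model.PutEventsRequestEntry;

public class PublishEvent {
    public static void main(String[] args) {
        try (EventBridgeClient events = EventBridgeClient.create()) {
            PutEventsRequestEntry entry = PutEventsRequestEntry.builder()
                    .eventBusName("orders-bus")       // hypothetical bus
                    .source("com.example.orders")     // who emitted the event
                    .detailType("OrderPlaced")        // what happened
                    .detail("{\"orderId\":\"1234\"}") // payload as JSON
                    .build();
            // The publisher neither knows nor cares which consumers react.
            events.putEvents(PutEventsRequest.builder().entries(entry).build());
        }
    }
}
```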
Yes, there's certainly a lock-in aspect there just like we had from the TIBCO days. Like, what happens if you build all of this stuff into a custom workflow system or a custom messaging system? What happens then if it's not quite right running all of that on Amazon?
Prem: Is it fair to assume, then, that there is this risk of lock-in even now — that there is no standardization, nothing like the dream of "write once, run anywhere" semantics, where you write an implementation and then deploy it to any cloud? It looks like we are very far away from that.
Mike Roberts: Yes. A lot has been written on this over the last few years — one person's vendor lock-in is another person's going 10 times as fast because they're leaning on the cloud provider's services. I think it's worth thinking about both of those ends. For example, Lambda code is just code. Lambda itself is not a framework; you have a tiny little interface that you have to implement. The actual code that you run on a Lambda function, you can run anywhere. There's not really much vendor lock-in to the actual code that you're running on Lambda.
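A minimal sketch of that point: the business logic below has no AWS imports and could run anywhere; only the thin adapter implements Lambda's small interface. The class names are invented for illustration:

```java
// "Lambda code is just code": the portable core has no vendor types;
// the adapter is the only Lambda-specific piece.
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;

// Portable core — runs in a container, a plain JVM, or anywhere else.
class GreetingService {
    String greet(String name) {
        return "Hello, " + name;
    }
}

// The only Lambda-specific code: a one-method adapter.
public class GreetingHandler implements RequestHandler<Map<String, String>, String> {
    private final GreetingService service = new GreetingService();

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        return service.greet(event.getOrDefault("name", "world"));
    }
}
```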
However, if you're using Lambda as part of a bigger architecture involving Step Functions, and EventBridge, and API Gateway, and you're tying all of these AWS services together, then that can help you produce an architecture extremely fast — and Amazon are mostly going to look after it. But if you need to suddenly move that somewhere else, you can't. That's the decision you have to make: do you want to rely on Amazon's idea of what architecture is and let them run your architecture, or do you want an architecture that can be run in different places? That's a huge decision that architects have to make.
Mike Mason: Yes, and I think it's something that we've touched upon in other podcasts on multicloud and polycloud, and it's a very deep decision. Because, of course, if you're asking whether Amazon is going to go away sometime, then maybe you need to be thinking about that if you're a bank or something mega big. Even then, the question is: do we want to be able to hot-swap between providers, and what's the cost of that going to be, and all that kind of stuff? We've been talking about the state of serverless itself so far. Can we talk a little bit about the state of people's understanding of serverless? Would you say the overall expertise with this stuff has improved over the last few years? Are there things people still don't understand or typically get wrong?
Mike Roberts: Yes. I would say, in some ways, there are a lot more people doing it, which is good. Is the average level of what people are doing with it going up? Questionable. I think there are a lot of companies doing some really good stuff with it right now, and quite a few companies have figured out how to manage it. But as lots and lots more companies come on and use it, there are still conflicting thoughts about how to use this stuff — some of which is the providers' fault, some of which is still immaturity in the ecosystem.
You look at the number of books now that exist on serverless, it's huge, right? It's like, take your pick, but it's hard. Obviously, I work a lot with clients that are using this stuff. That's now become really our job. When we started the company we thought we were going to be serverless consultants, and then basically what happened was no one was really doing serverless, so we became AWS consultants. Now what's happened is actually we've become serverless AWS consultants because there's now enough work out there that's serverless that we came back to where we started, but really focused on AWS.
I see a lot of companies that are using this stuff, and there's still a lot of confusion about what needs to change and what doesn't. For example — I'm going to use a phrase that I hate using, but everyone understands what it means — CI/CD. Everyone thinks that CI/CD needs to change when you're using serverless. That's pretty much not the case: your overall CI/CD setup is basically going to stay the same; there's not much difference there. Your application architecture, though, because you don't have these always-on servers — that actually is going to significantly change.
I think that people come in and still expect things to be different everywhere, and where they are different, there's still no real solid consensus on how to do it. A really good example: folks will know that, because of the way that Lambda and similar services work, it's recommended that you keep them very lightweight and don't use lots of heavyweight frameworks. Okay, so how do you build your applications if they're not using heavyweight frameworks? I have my opinions, but there's not a standard framework, because that's by definition what you shouldn't be doing.
It was really nice when Rails and Spring existed as this standard place that you start from, but you pull the frameworks out from under people, and it gets a little bit fuzzy as to what to do. Even in terms of purely using Lambda from an architecture point of view, there's one question out there which is very simple to ask but causes all kinds of arguments in the serverless world, which is: you have an application and it needs to do 20 things. Should that be 20 Lambda functions, or should that be 1 Lambda function that does 20 things? If you want to start a religious war among a bunch of Lambda developers, just ask that question.
I actually wrote an article about this last year, and oh my goodness me, the pushback I got. That's a very simple question, should it be one Lambda, or should it be 20? Even that, there's still no consensus about what is best there.
Mike Mason: I think that's interesting...
Prem: Sorry, Mike. I'm curious to know what your recommendation would be. Would you have one thing doing 20 things, or 20 things doing one thing? What's your recommendation if I were to put you on the spot?
Mike Roberts: I'm a consultant, [laughs] so I would say it depends. No — the point of the article that I wrote last year is that I think there is a Goldilocks solution to this, which is, actually, I would probably have about four Lambda functions, is my guess. I'll give you a very specific case right now. I'm working with a client; they are re-platforming a 15-year-old legacy application that's in .NET, runs on-prem, blah, blah, blah. They're completely rebuilding the system in TypeScript on Lambda. I came in when they had just started, and they were starting to get a lot of Lambda functions.
One of the parts of their system is an administration application. It's a public-facing app mostly, but they have an admin app, which changes a lot of configuration types. There's about 80 different things that they need to do from their admin app. Having 80 Lambda functions would be horrible to work with. All of their admin app, I now have going through one Lambda function. The same application, the same microservice, if you will, also has behavior that faces the outside world. There are three Lambda functions that deal with requests that are involved with the external application.
Those split down by: is this a public request, or is it an authorized request? That kind of thing. Then I have a couple of Lambda functions in there which are not directly called via an API at all and are used from other sources. For what I'm building with them right now, they have about 8 different Lambda functions, and it probably satisfies about 150 types of requests. If you talked to a lot of people, they would be outraged that I'm not on one extreme or the other. But again, the point I'm trying to raise is that there's not really a huge consensus on this, and it's not something that Amazon speak about at all — "Okay, you need to do 150 things; how many Lambda functions should you have?"
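As a sketch of the "one function, many operations" end of that spectrum — a single Lambda function behind an API gateway, routing on the request path rather than deploying one function per operation. The paths and helper methods are invented for illustration:

```java
// One deployment unit serving many admin operations: route internally
// on the path instead of creating one Lambda function per operation.
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

public class AdminHandler implements
        RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent request,
                                                      Context context) {
        // Internal routing: one function, many operations.
        String body = switch (request.getPath()) {
            case "/admin/users"  -> listUsers();
            case "/admin/config" -> showConfig();
            default              -> "unknown operation";
        };
        return new APIGatewayProxyResponseEvent().withStatusCode(200).withBody(body);
    }

    private String listUsers()  { return "[]"; } // placeholder
    private String showConfig() { return "{}"; } // placeholder
}
```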
Mike Mason: That's interesting as well because it makes me think of the fact that good taste in software design is actually really important no matter the actual technology that you're using. That the tech stack can change, but code that looks good versus code that looks rubbish is a constant, right? If you are used to designing systems that work nicely and you get yourself into a problematic area where let's say you had Lambda proliferation, and it was causing you a problem, you'd still want to be able to identify that and scratch that itch or make an improvement to that.
It also does make me wonder because I've seen tons of teams get into problems with event-driven systems like that whole push towards everything being more event-driven, that AWS is doing. Is that harmful or do you imagine people are going to tie themselves in knots and need bailing out?
Mike Roberts: Yes, and we've been doing that for 20, 25 years. In the end, when you get it right, I think it's the right thing. We're so used to building basically APIs. I'm going to run a server, I'm going to ask it to do something. That's just how we get taught from college up, how to build systems. People know how to do that. Just because people know how to do it and can do it right, doesn't necessarily mean it's the best way of doing it. I think sometimes it's worth going through some pain. Take, for example, Gregor Hohpe wrote a book on messaging patterns 20 years ago.
I think he was right to write that book. If you're building the type of application problem that book is trying to solve, you wouldn't want to use APIs for all of it — it's much more efficient in a number of ways to use messaging-based patterns. But it's still hard. It's just hard because it's not what we're used to — I can say this to you, Mike, from your old snowboarding days: it's like snowboarding goofy, riding with your other foot forward. It's not that it's a bad thing; it's just that you have to learn it. This is one of the things that frustrates me as I get older in my career: telling people that maybe they want to take a couple of days to learn something before jumping in is not met well these days.
People want to read a half-hour blog article or watch a half-hour video and know everything they need to know to get going. I get that, because there are a lot of things that say you can do that. I remember back in the old days of Microsoft, they used to say you could be up and running in half an hour on any of Microsoft's technologies. I feel like that's not always the best way, especially when it gets to architecture, and architecture touches so many aspects of a software delivery lifecycle. Jumping into architecture after half an hour of watching a video — that's typically a terrible idea.
Prem: Again, this is related to what Mike was asking in terms of the whole event-driven architecture becoming a bit of spaghetti, especially when you look at it from the perspective of the entire system. What does the developer experience look like? How do I debug? Can I work on my laptop exclusively, run unit tests, and do that kind of thing easily?
Mike Roberts: This is a whole can of worms. I used to have a very succinct answer for this, which is: I run all my unit tests locally. I try not to use any services in my unit tests. Unit tests are low-level tests that are just testing my own code, not relying on the external interfaces of my system — that's what a unit test is. Then the other part of that is I have integration tests, of which there are fewer, that test my system and how it integrates with its service boundaries in the outside world. That was my nice easy answer. People didn't agree with me, because they wanted to do integration testing locally, but that was my nice easy answer.
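A sketch of that unit-testing split: the test exercises only our own code, with no cloud services or network involved. GreetingService is the hypothetical portable class from the earlier sketch:

```java
// Pure in-process unit test: no Lambda runtime, no AWS calls, no network.
// Integration with real service boundaries is tested separately, and
// there are far fewer of those tests.
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class GreetingServiceTest {
    @Test
    void greetsByName() {
        assertEquals("Hello, Ada", new GreetingService().greet("Ada"));
    }
}
```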
Where that's starting to get a lot trickier is that people are starting to develop not locally — they're starting to use cloud-based development environments, whether that's VS Code running in GitHub Codespaces or various other tools; people are running stuff in the cloud now. I'm not going to say that those cloud-based development environments integrate really well with cloud-based application services yet, but they will, and they will soon. A very classic case in point: one of the problems with doing local development with Amazon services is that you have to think about the permissions for how your thing is interacting with Amazon services.
If you are already running something, if you are already running your development environment in the Amazon cloud, it already has a concept of identity with the Amazon cloud. That kind of problem just goes away to some extent. I think it's going to get harder before it gets easier to define what is the developer experience for this.
Prem: Again, another thing about the whole architecture aspect of this, given that this is pushing or seems to be pushing us towards adopting event-driven as a first-class citizen, then isn't it fair to assume that you have to have world-class observability apparatus to be able to make sense of the end-to-end?
Mike Roberts: I think that people have been getting by without world-class observability and have been muddling through. I do think that things are improving significantly there. Obviously, you have companies like Honeycomb, and you have OpenTelemetry, which they are closely associated with, and Amazon are trying to jump on that too. I haven't gone into depth on these things, but it feels to me there's starting to be a consensus among all of the providers about what observability is going to mean. The fact is, you really do need a provider when you're talking about observability in a production environment, because there's so much data involved, coming from so many places, that trying to host that yourself, if you are a small to medium-sized business, is just a nightmare. You really have to pick your observability vendor, which may be Amazon, or Datadog, or Honeycomb, or whomever. What I'm hoping is that that choice becomes somewhat detached from how you tie your architecture into observability.
Mike Mason: I've got one more, on costs. Initially, serverless was seen as a way to reduce costs, because you can scale to zero and it's not costing you anything. The concern is always the scaling in the other direction: at what point do you have enough traffic going through these things that running an EC2 instance becomes more cost-effective than running Lambdas? Costs in general — how has that evolved over the last couple of years?
Mike Roberts: It's a little hard for me to say, because the kind of work that I do typically happens before applications are being run at scale. I was working with a client about four years ago where they were running some higher-throughput applications, but those weren't really serverless-based; it was more container-based. Where clients tend to bring me in is when they say, "Okay, we need our 50-to-100-person developer team up and running on this stuff. Please help us get to a solid point, because there aren't any books on it." That being said, I still feel like the Lambda bill is a rounding error on most people's AWS bills. That might be for various reasons, but I suspect if you were running 10 billion requests a month through your serverless system — which is an insane amount — you would start thinking, "Okay, so how do I optimize this?"
There are ways of doing that. If you're starting to think about that amount of traffic, you can think about replacing API Gateway with an Application Load Balancer — Amazon's load balancer — but still use Lambda behind the scenes. If you are using API Gateway and Lambda, it is very typical that your API Gateway costs are going to be significantly more than your Lambda costs. Again, Lambda isn't really the problem here; it's the other things surrounding it that are going to be the problem. I see a lot of medium-sized businesses that aren't serverless, that have really gone all-in on the Kubernetes world, and they have 10-to-15-person teams whose entire job is to manage the Kubernetes environments.
It's a lot harder to tell how much that costs, because it's people. Whereas one of the "problems" with serverless and cloud-based costs is that it's very easy to see how much it costs, and people go, "Oh, that's that much money." Just because it makes the costs more visible doesn't mean they're higher than these other things that require 15 people to manage them.
Mike Mason: Hey, Prem, I think you can ask your next question, and then maybe we can do a wrap because we're getting towards the end.
Prem: Yes, perfect. Mike, here is a question for you in terms of serverless: how fine-grained should we, or can we, go with this?
Mike Roberts: Meaning from an application architecture point of view?
Prem: Yes. Can I now start implementing specific operations of things? Like let's say I want to cancel an order, I want to place an order, or I want to add items to an order. Can each of those now become these independent functions? Maybe you covered that earlier.
Mike Roberts: No, I didn't — that's one thing we didn't get to, and this is actually another thing which I think is not obvious when people think of serverless. There is a difference, to me, between the service boundary of a thing that I'm making and how it's implemented. I typically think of it as: I am deploying a serverless app, or a serverless service, which has a service boundary. That service boundary is not actually going to look very different, normally, whether there's Lambda behind it or a Docker container behind it. Now, it might be that I deploy a microservice that's got a Docker container behind it, and it's just one Docker image that runs with one Docker entry point and opens up 50 different routes that you can call.
At the end of the day, it's an API into 50 routes in Docker. It might be that I have exactly the same service boundary, and I deploy it in exactly the same way in CI/CD — it's deployed as one atomic service — but it just happens to have 15 Lambda functions in it. To me, one thing that's missed a lot is that people associate the sizing of their Lambda deployment with the granularity of their service boundaries, and that, I find, is often a big error. You hear about people saying, "I have 10,000 different Lambda functions in my organization, and I have no idea what they're all doing."
My answer to that is: why are you thinking about 1 function in 10,000? You should be thinking about your 1 application in 100, and then drilling down to the 1 Lambda function in 50 that's within that application. Thinking of 10,000 Lambda functions is just not useful, but this is a place that I don't think people have got to yet. I think Amazon have done better with their tooling on this over the last few years — they've been trying to present the idea of a serverless application over the last two or three years — but that's really come from the tooling point of view rather than the application architecture point of view.
To answer your question, Prem: if I was building an application, I would architect it with the level of granularity that I would typically use anyway. How I implement that inside would depend on a number of things. I might use 50 different Lambda functions based upon operational constraints, or I might use 1 Lambda function if it was basically doing 50 flavors of something very, very similar.
Mike Mason: Awesome. I think on that note, we are running out of time on the podcast. I'd like to say thank you very much, Mike Roberts, for joining us. Mike, where can people find you online?
Mike Roberts: The best place to find me online is my company website — blog.symphonia.io is where you'll find most of my rantings. I have been on Twitter, but I've mostly moved to Mastodon now, so you can find me on Mastodon.
Mike Mason: Excellent.
Mike Roberts: I only have a company of two people, so we are not running our own Mastodon instance, but yes, I've moved most of my social tech chatter to Mastodon. I'm on the Hachyderm instance.
Mike Mason: Awesome. Okay, Mike, thank you very much for your expertise today, and thanks, Prem, for your co-hosting.
Prem: Thank you very much. It was wonderful talking to both of you.
Mike Roberts: Thanks, guys. Thanks for inviting me.