Brief summary
Most organizations today recognize the critical importance of security. But at the same time, development teams are pushed to move faster, to deliver more frequently, and to behave autonomously. It can be tough to meet these twin demands.
Podcast Transcript
Rebecca Parsons:
Hello everybody. Welcome to the ºÚÁÏÃÅ Podcast. My name is Rebecca Parsons. I'm the chief technology officer for ºÚÁÏÃÅ.
Mike Mason:
And my name is Mike Mason. I am head of technology with ºÚÁÏÃÅ and I work closely with Rebecca in her office of the CTO, and together we're co-hosts for the ºÚÁÏÃÅ Podcast. Today we're joined by two of our security experts, Cade and Ken to talk about DevSecOps. So Cade, would you like to introduce yourself?
Cade Cairns:
Sure, I'm Cade Cairns. I'm a principal technologist here at ºÚÁÏÃÅ. I have a fairly long career in IT, spanning a lot of development roles and also a pretty good blend of security. These days I'm more focused on the intersection of the two.
Mike Mason:
And Ken, could you say hi and introduce yourself?
Ken Mugrage:
Sure. So I'm Ken Mugrage. I am a technology advocate for ºÚÁÏÃÅ products. I've been working with continuous delivery for quite a number of years, nine or ten now. I'm also a global organizer for a conference series called DevOpsDays, which is where the term DevOps came from.
Rebecca Parsons:
Thank you, Cade. Thank you, Ken. I'd like to start by talking a little bit about DevOps in general and some of these notions that are floating around: DevSecOps and some of the others. It seems like Dev, followed by the kitchen sink, followed by Ops is one of those memes going around. But Ken, can you tell us a little bit about DevOps and how you see security and aspects of security in DevOps?
Ken Mugrage:
Sure, happy to. So first off, I think I like to think of DevOps as a portmanteau of two verbs instead of nouns. So it's not developers and operators, it's developing and operating software. It's my impression or my opinion that when you think of it that way, then it's more inclusive. You think about all of the things that are involved with developing and operating software. I'm not a fan of adding more words to the thing, only because I think the intent to make it more inclusive actually has the opposite effect.
Ken Mugrage:
So, for example with DevSecOps, if we say, "Okay, security is not part of DevOps, so we need DevSecOps," then are we implying that user experience or compliance or some other department is also not part of DevOps? And so there's a person, Nathan Harvey, who says if you take sugar, baking soda, chocolate and eggs, you don't have sugbakechocegg, you have cake. So I mean that's my thing: I think DevOps is already inclusive. And when we say this team is practicing DevOps, but this team is practicing DevSecOps, it kind of lets the first team off the hook for doing security.
Cade Cairns:
I think that's a good point, Ken. But at the same time, it feels like security has been absent from the table in a lot of the organizations that I've worked with. And consequently, it seems like we're hearing a lot more about DevSecOps and building security in, and even getting security more involved in delivery these days. Because for a lot of organizations and a lot of people, that feels like a major achievement. I think we're also in a place where collectively our industry is starting to really understand that we need to do more to keep our customers' information safe and keep our IT estate safe, and is looking for ways to do that.
Mike Mason:
That's surprising to me, Cade, to hear that the progress has been so slow, given the high-profile breaches and a lot of, at least, talk about the importance of security and keeping people's data private. Is it simply so hard to do that organizations are not making progress? Or are there other reasons that it hasn't happened despite the [inaudible 00:04:15]?
Cade Cairns:
I think there are lots of organizations where they're doing a great job of this, or at least where security is more actively involved. But at the same time, the development side of the house, the people doing DevOps, are continually moving faster, and they want greater autonomy, and they're moving to platforms, and they just keep doing things faster and faster. And really, for most any organization out there, there's a finite number of security people who can try to keep up with all of the different efforts that are going on. As a consequence of that, and probably just of the fact that security has long been a pretty siloed function, it hasn't generally been as close to development as it probably could be.
Rebecca Parsons:
So following on from that, you've talked about the finite number of security people, the limited number of security people. How much of what we're talking about here do you think is simply related to the scale of technology, the extent to which technology is being more and more heavily used and becoming more and more central to different kinds of organizations? It's not just technology companies, or pure-play technology product companies, that have to worry about these things; all organizations are having this problem. How much of this do you think is about scale, versus maybe just the speed of advance of technology?
Cade Cairns:
I think part of the challenge is definitely scale, but perhaps it's not just about scale; it's also about the way that a lot of security tasks have been done in the past. Quite a lot of security testing has traditionally been fairly manual, and while people rely on a lot of great automated tools, I don't think that many of them have really been optimized for repetitive use in a CI/CD setting, in the same way that our development tools have.
Cade Cairns:
And between that and the limited number of people, it's a pretty big challenge to scale and to have the same impact as, let's say, we do with the testing that we do as part of our software delivery practices.
Mike Mason:
Something that I saw in a couple of years' worth of the State of DevOps report, also, was that early on, high-performing IT organizations were the ones who were deploying frequently. And we know that the ability to deploy software frequently is helpful in improving responsiveness and usually improving quality. But some of the laggard organizations were following that as kind of a cargo-cult sort of thing, where they would start to deploy more frequently but didn't actually have all of the processes and good levels of automation and testing in place. And so even though they were deploying more frequently, they were seeing higher failure rates, longer mean time to recovery and all of that kind of thing.
Mike Mason:
What I think was interesting about that though was that lots of organizations saw this whole continuous delivery continuous deployment thing and tried to emulate it. And presumably, the more builds that you try to throw into production, the more stress you're putting on something like a traditional security process.
Ken Mugrage:
Yeah. And to be honest with you, that's one of the reasons I'm not personally a very big fan of deploys per day, or per time period, as a metric. I mean, certainly if that's an important thing to your business and you need to enable it, if you're a trading firm, it's fast moving, et cetera, then it's very important. But the pattern you mentioned is exactly right: people started deploying faster, but worse. And it wasn't just security; it was other kinds of testing too, once that became the metric. And I think we all know that metrics drive behaviors. So people say, "Oh, I don't want to do security testing in my pipeline, because that's going to take minutes or hours, and I don't want to eat that up and have to wait and so forth." Some of that has to do with pipeline design, which we can go into later if we want. But I think that's the danger of the metric. We need to find out what the right release frequency is for our business.
Cade Cairns:
And that's certainly important. But in my experience working with a lot of organizations, much of the time we just don't see the security requirements in our backlogs that would require us to write tests for this stuff. And certainly, what you said is a concern for pretty much any team that I've worked with, as it pertains to some of the larger, clunkier security testing tools out there that can take quite a while to run. There are also lots of opportunities, or should be lots of opportunities if we had proper security requirements, to write bespoke tests, or find other ways to make sure that we haven't introduced security defects and have properly applied security controls. And I think a lot of the time the opportunity is being missed, because we're not having those conversations. And that probably ties back to the fact that, well, security hasn't been included that much.
Ken Mugrage:
I think a lot of it is information sharing, and Cade actually hinted at this a couple of times. It's incredibly important to me that these things get done, but I'll be honest, I can't tell you exactly which tools to use or which tests to run. So I'm right in there. One of the patterns I've seen that works well, especially if we have long-running teams, so the whole products-not-projects thing, is to embed folks on that team, at least at the beginning.
Ken Mugrage:
I know we've done it on some projects with auditors and other types of things where we share, "Okay, these are the things we're going to be doing from a technology perspective. This is the tech stack that we've chosen. This is the type of information we're going to be asking of our users. These are the systems we're going to connect with, et cetera."
Ken Mugrage:
The security person can say, "Okay, well in that case, these are the things that we need to test. This is where your attack surface is, this is the platform that you might want to use. This is the whatever." And then together they come up with the plan, if you will: the things that are going to need to be in the pipeline, versus the things that are going to need to be manual. Because, let's not gloss over this: it can't all be automated. I'm a continuous delivery freak, and I have a good friend Jason who sits in a dark room and does pen testing, and I want Jason.
Ken Mugrage:
And so, it's not a substitute for that kind of thing. But we can bring that knowledge onto those teams, even if it is the good old matrix organization, or a guild, or call it what you want, where you help set up that project, help set up what the testing needs to be, and then leave and move into more of an advisory role.
Cade Cairns:
I totally agree with that. I mean, there are some challenges with scale, again; there are only so many people who can realistically devote that much of their time to working with a team. But I'd also point out that security is something that needs to exist throughout the entire life cycle of a project. And so, there are a lot of these really valuable upfront activities that we should do. I almost consider those to be checklist kinds of activities. I mean, they vary a little bit from project to project, depending on your circumstances and your tech stack and that sort of thing.
Cade Cairns:
But where I think a security specialist or a security subject matter expert delivers the most value is in how they empower engineering teams, development teams, to do this stuff more themselves. And there are a lot of things that I would probably never consider to be the responsibility of a development team, unless you happen to be lucky enough to have some folks on it who are really passionate about security. But there are a lot of things that I think we realistically should expect them to do. The problem is, we've never really asked them to in a lot of cases. And going back to a point that you made a second ago, I think a lot of the time we just don't know where to get started or what the right thing to do is. Just having some of that insight coming from somebody who has experience in this area, who understands what good security looks like and how to incorporate it throughout development, is a good starting point. However-
Mike Mason:
Can you give us-
Cade Cairns:
Sure.
Mike Mason:
Can you give an ... I mean, Ken just talked about his friend Jason, who sits in a dark room doing pen testing, and obviously that's a fairly specialist role. I think it's clear that keeping up with the latest pen testing stuff is a very, very specialist thing.
Cade Cairns:
Right.
Mike Mason:
But what things should the teams be doing? Are there a few low-hanging-fruit-type things that all development teams should be incorporating into their ... Just give us an idea. I mean, this isn't going to be an exhaustive list, but just to give listeners an idea of the kinds of things that we think most teams should be doing themselves.
Cade Cairns:
Well, from an automation standpoint, there's a wealth of different automated tools that exist out there to, let's say, try to identify common application security defects in the software that we're building. Or let's say we're deploying to cloud infrastructure or a container orchestration system or things of that nature. There are checklists that tell us how to do these things safely and securely and how to set them up well. There's automated tools that we can run over them that will tell us if we've left glaring holes in our configurations or have made mistakes.
Cade Cairns:
Every organization should have a tool that, for instance, will tell them if they've left their AWS S3 buckets open, because that alone has been the source of many breaches, or data disclosures, I should say. There are many really basic things like that, quick wins, that I think are very much within the realm of capability and responsibility that delivery teams could take on, and just get in place from day one. Even if they're not necessarily the ones downstream from all of the information it generates, and maybe that goes to the security team or to somebody else, it's helpful stuff to have.
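The kind of automated check Cade describes can be sketched in a few lines. This is a hypothetical illustration, not a specific tool he names: the function below classifies an S3 bucket ACL's grant list as public or not, using the well-known AWS "AllUsers" and "AuthenticatedUsers" group URIs. In a real pipeline you would fetch the grants with an AWS client (for example boto3's `get_bucket_acl`) and run a check like this over every bucket.

```python
# Sketch: flag S3 bucket ACLs that grant read access to the world.
# AWS represents "everyone" and "any AWS account" with these group URIs.
PUBLIC_GROUP_URIS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def is_publicly_readable(grants):
    """Return True if any ACL grant gives READ or broader access to a public group.

    `grants` follows the shape of the "Grants" list returned by the S3
    GetBucketAcl API: dicts with "Grantee" and "Permission" keys.
    """
    for grant in grants:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUP_URIS:
            if grant.get("Permission") in {"READ", "READ_ACP", "FULL_CONTROL"}:
                return True
    return False
```

A check this small can run on every pipeline build, which is exactly the "quick win from day one" being described, even if the findings are routed to a security team rather than the delivery team itself.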
Cade Cairns:
We gave a talk at ºÚÁÏÃÅ' XConf event on security from day one, and covered a bunch of these concerns. They're just really basic, quick-win-type things that teams can do. It's not always clear where to start, but there's a pretty short list of stuff that can actually be pretty impactful, at least to get things going in a positive direction.
Ken Mugrage:
If I could, I think another quick win is to stop blindly using public repositories. I see a lot of systems that are built on, we always pick on container images, but that's one, or it could be [inaudible 00:15:48], it could be RPMs, could be what have you. The question is: who do you trust? And if you're building on something that's public and you don't really know what's in it, then you don't really know what's in it.
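One lightweight way to act on Ken's point, offered here as a hedged sketch rather than anything the speakers prescribe, is to check that a Dockerfile pins its base images to an immutable digest instead of a mutable tag like `latest`. The function name and heuristic are illustrative assumptions; a production check would also resolve multi-stage build aliases and consult an allow-list of trusted registries.

```python
# Sketch: scan Dockerfile text for FROM lines not pinned by @sha256 digest.
# A mutable tag can silently change underneath you; a digest cannot.
import re

FROM_RE = re.compile(r"^\s*FROM\s+(\S+)", re.IGNORECASE | re.MULTILINE)

def unpinned_base_images(dockerfile_text):
    """Return base images that are not pinned to an immutable @sha256 digest."""
    flagged = []
    for image in FROM_RE.findall(dockerfile_text):
        if image.lower() == "scratch":
            continue  # scratch is the empty image; there is nothing to pin
        if "@sha256:" not in image:
            flagged.append(image)
    return flagged
```

Run as a pipeline step, a check like this fails the build before an unverified public image ever reaches production, which is a cheap way to make the "who do you trust?" question explicit.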
Cade Cairns:
Yeah, and that's certainly scary. I thought the point about the person in a dark room was an interesting one. It kind of goes back to one of the challenges that we have. There are so many different types of security specialists, first off. So I wouldn't necessarily expect the person who is incredibly strong at penetration testing to come over and help plan for secure delivery with a development team. But I think the dark room thing is fairly interesting to think about, because a lot of the time we see security teams that have traditionally been fairly siloed in their organizations. Almost every single client that I've ever worked with, every organization that I've ever worked with, has a relatively small assortment of people. And those people have a huge number of tasks: trying to keep the organization safe, trying to deal with all of the workstations, trying to deal with all of the software that they buy, trying to deal with their cloud operations.
Cade Cairns:
Then we have high-performing teams practicing DevOps and trying to go faster and faster, and we're saying, "Yeah, keep up with that as well." And it's a lot to keep up with. Unfortunately, if you're in that room and you're not used to being on the ground working with the people who are building custom software, or who have asked for relative autonomy over the infrastructure that they control, they're not going to benefit from your specialized knowledge and your point of view on risks to your organization. And of course, you're not really going to know what they're up to either.
Cade Cairns:
I don't really place any blame for that on any group of people in any particular role, because it's been that way for a long time and both sides have almost differing interests. Security is very focused on making sure that ... or they're focused inwards inside of an organization and making sure that we stay safe and secure. And teams that are delivering value are focused outwards and trying to push stuff out as much and as fast as they can.
Rebecca Parsons:
Right, which again gets us back to the role of automation. Because the more we can automate these security checks, you're using your security expertise once, to decide what should be in there and to what extent it can be automated. And then that can scale across however many teams, however many deployments, and it removes the bottleneck. So what's your sense of how we're doing in terms of advancing on automating more and more of these tasks? I mean, this is something that has been part of continuous delivery and continuous deployment: automate all of the things. And we heard for a long time, "Oh no, you can't automate that. You can't automate that," and we've been advancing. What's your sense of the security community's stance? And whether those things, if we looked at them differently, would be easier to automate?
Cade Cairns:
I think that there are many, many opportunities to automate things that are identified as potential security risks or security controls that we've been asked to build into our software or into our systems. But part of the challenge is that they don't always exist as formal requirements. Because we don't always have a security specialist or somebody with that specialized knowledge, who's able to give us insight into what we need to be building in. That's to be expected.
Cade Cairns:
As an example, we were working with an organization some months ago that had something like 8,000 developers working on things at all times. And one of our other consultants inside ºÚÁÏÃÅ said to me, "Well, we can't possibly expect the security team to provide high touch for everybody. There are simply too many people to scale out to."
Cade Cairns:
So going back to the checklist thing that we were talking about earlier, everybody should at least have some baseline of automation that we're applying across all of our projects. And for those projects that are higher risk or higher significance, depending on their relative value to the business or relative risk, I think that we really need to think about ways to bring those specialists in a little bit more, so that our requirements can reflect what needs to be built in. Out of that, we'll get better tests. But it's pretty challenging to see that happening very much until ... And it goes back to what I was saying earlier. If we don't have people on the team with specialized knowledge about security, which a lot of the time we don't, people don't know what they don't know, and it's pretty hard to build that in.
Ken Mugrage:
I think this is one of the areas where automation actually can help, depending on how it's applied. One of the biggest anti-patterns that I see a lot in continuous delivery pipelines is people trying to do everything in a strictly linear fashion. So first I run the unit tests, and then I run the functional tests, and then I do this and that and the other thing. And somewhere in there is security, and somewhere in there is compliance, which has a couple of side effects.
Ken Mugrage:
First off, if the pipeline doesn't get that far, then those tests don't get run. So if there are things failing somewhere earlier, those later tests just don't run often enough. By the time issues are found, it's like, "Oh, now it's tech debt, and now we have to negotiate with a product owner," or what have you, et cetera.
Ken Mugrage:
That's one thing I think that platforms can bring to the table. And I understand it's a staffing issue. I think it's a massive staffing and learning issue; we know we need more people with this knowledge. But with modern continuous delivery, as opposed to continuous integration, we can do things that say, "If you're doing a Java tech stack and you build a jar, then I'm going to set up my own pipeline. And every time your jar is done, I'm going to pull it into my pipeline. Say I'm part of the security team: I'm going to run a bunch of things on it, and I'm going to tell you about a lot of them and not tell you about others, because I want to get rid of unconscious bias.
Ken Mugrage:
Meanwhile, your unit tests and functional tests are still running. But I'm going to pull that jar into mine every single time. Give you a dashboard, give you visibility, et cetera, and help you when something goes wrong, not if, when. But I'm not going to make your pipeline longer doing it, and I'm not going to make it so you can't do user acceptance testing because my tests are still running, because they might run for a longer time, et cetera."
Ken Mugrage:
Excuse me. We see people running things in parallel all the time to do, like, Chrome and Firefox. But you can use parallelism for more than just tests of the same type; you can do it across different code bases. So people who do have that expertise can say, "Okay, if you're going to create a Docker image, then I'm required to set you up as," the tool I use the most calls it a material, "and every time you create a new image, I'm going to pull it in and run this stuff on it." I think that might help in some places. And it's something I don't see very often.
Mike Mason:
And I think that gets back to some of the stuff you were mentioning earlier about not necessarily trusting repositories on the internet. Because people did this with, I don't know, library files for a long time. And now we're doing it with entire Docker images, which is downloading somebody else's stuff from the internet and effectively running it. I mean, is it true that the security stance on a piece of software maybe moves a little more slowly than the individual-lines-of-code level? Like, if it takes a while to scan an image, maybe that's okay, because you're not creating images every single time you check in, and you're not completely redoing stuff every time you check in. Even dependency analysis, right? That's a slower-moving target, so it's okay if it takes a little bit of processing time?
Ken Mugrage:
And I think that's what I'm saying: think of it as a diamond dependency. So if we build the jar, and that kicks off the security scan and also kicks off the unit tests, the unit test stage of my pipeline might run ten times for every one run the security scan does. But I can set them up as a blocker using that diamond dependency, fan-in and fan-out we call it, that kind of thing. So the security tests might run once for every ten of mine, and maybe only every tenth build can get to production, because they are a blocker to production. But we're still going to give the development team fast feedback.
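The diamond Ken describes can be pictured as a pipeline topology. The sketch below is a hypothetical, simplified config in the spirit of GoCD's pipelines-as-materials model (Ken's "material" term), not exact syntax for any tool; pipeline names, stage names, and the repository URL are all invented for illustration.

```yaml
# Fan-out / fan-in ("diamond") sketch: build fans out to unit tests and
# security scans in parallel; deploy fans back in and is blocked on both.
pipelines:
  build:
    materials:
      code: { git: "https://example.org/app.git" }   # assumed repo URL
    stages: [package]                                # produces the jar
  unit-tests:
    materials:
      jar: { pipeline: build, stage: package }       # triggered by every jar
    stages: [fast-tests]
  security-scan:
    materials:
      jar: { pipeline: build, stage: package }       # same jar, slower cadence
    stages: [slow-scans]
  deploy:
    materials:
      tested:  { pipeline: unit-tests, stage: fast-tests }
      scanned: { pipeline: security-scan, stage: slow-scans }  # fan-in: both gate production
    stages: [release]
```

The point of the shape is that the long-running security pipeline never delays the unit-test feedback loop, yet only artifact versions that have passed both legs of the diamond can reach the release stage.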
Ken Mugrage:
And then, I mean, probably getting a little bit out of scope, but I can put in short-circuit things. So if I do have an emergency, when the jar is done I can skip those things and deploy it, and still let them run after the fact, trailing instead of gating. There are all kinds of things you can do. And that's the whole advantage, I guess, of running entire pipelines in parallel: they don't finish at the same time. I don't have to wait for the security tests to be done before I can move to the next step.
Cade Cairns:
I was going to say, those are really great points, Ken, and I would love to see more people talking about how to get all of these things running in parallel in a successful way, and make it as frictionless and as easy as possible for the team to just get these things in place. The tools you're referring to are very much the category I was talking about earlier: things that every team should simply have as part of what they do, because they will enable us to find fairly common defects, or tell us when our dependencies are out of date, or when our software is out of date, and really reduce the cycle time for the team to act on that, which is ultimately where we want to go.
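A minimal example of the dependency hygiene Cade mentions, offered as an illustrative sketch rather than any tool the speakers endorse: flag entries in a `requirements.txt`-style file that aren't pinned to an exact version. The function name is an assumption; real tools in this space additionally match pinned versions against known-vulnerability databases.

```python
# Sketch: flag dependencies not pinned with "==" in a requirements-style file.
# Unpinned dependencies make builds non-repeatable and harder to audit.
def unpinned_requirements(requirements_text):
    """Return requirement lines that lack an exact '==' version pin."""
    flagged = []
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        if "==" not in line:
            flagged.append(line)
    return flagged
```

Like the other quick wins discussed above, this runs in seconds and so fits comfortably in the fast leg of a pipeline, leaving heavier vulnerability scanning to the slower, parallel leg.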
Cade Cairns:
But there are a lot of security concerns that we want to think about beyond that as well, like the defects that we write into our software, or perhaps things that we do unintentionally when we're modifying our infrastructure or bringing up new containers or things like that. Ideally, we have some set of security goals that somebody has helped us figure out, or maybe we've worked out ourselves, and we need to write automated tests to make sure that what we've done remains of high quality and isn't introducing new risks to the organization. And that's where I think we want a little bit more help from specialists, when it makes sense.
Cade Cairns:
The one other thing that we haven't spoken about, that Ken mentioned a second ago, is visibility and feedback loops and things like that, which I wouldn't mind touching on if we can fit it in really quickly. Because I think that's probably the most important thing that's absent today for a lot of teams out there. It's really hard to get feedback on whether you're doing the right thing or the wrong thing for security. And until we improve that, through reporting from tools, from hearing back from the people doing tests more often, or from things that are actually happening in our production environment, it's pretty hard to expect teams to get any better at this stuff.
Rebecca Parsons:
So we're getting close to the end. I'd like to ask you each to identify what do you think is the most important thing for a team to do to get started in this?
Cade Cairns:
I've spoken about the fact that a lot of the time the security team is siloed, or perhaps a little bit harder to get hold of, and maybe hasn't traditionally been that involved or engaged with the custom delivery that we're doing. I think it's really important just to try to form better relationships. In fact, I think it's very important for both sides to form better relationships and try to get a better understanding of what each other is doing and what allows each other to do their jobs successfully. And ideally from that, not only gain empathy and share our knowledge; I also think that by collaborating with development teams, security teams will learn a great deal about how to automate tools better, how to improve the general state of the many tasks that they need to do, and hopefully find ways to automate some of those things away. And development teams will benefit a great deal just from having greater contact with people who have more specialized knowledge.
Ken Mugrage:
I'm going to cheat, because I know you said one thing. I'm going to say visibility, but part of it is visibility into that information. So that's the cheat: how can we share that information, whether it be podcasts or what have you, even internal ones, about the things we're going to do. But then also visibility from a technical standpoint, dashboards and whatever: "Here are the tests we're running, here are the ones we're not running, here's how we're scoring, et cetera." Even the simple things. There are tools out there that will tell you what percentage of your application is using external artifacts, and whether those have passed tests, et cetera. There are some pretty scary statistics out there on things like Maven repositories and so forth.
Ken Mugrage:
And so, if we have dashboards just for the sake of dashboards, pretty lights don't do anything. But if we can actually show folks, "Here are the things that you could be testing and here's what you are testing," even if it's from that perspective, kind of an audit of the pipeline, if that makes any sense. "Here are the testing tools you're running. You're not running these seven that are available to us. Let's talk about that."
Rebecca Parsons:
Well, thank you Ken. Thank you Cade for that fascinating discussion on security. And I didn't come away too terrified, so that's probably a good thing.