#154 July 15, 2021

Gatekeeper and Policy Controller, with Max Smythe

Hosts: Craig Box, Jimmy Moore

Gatekeeper is an open source project which lets you enforce policy in a Kubernetes cluster. It’s also the basis for Policy Controller, a hosted and managed version now available for all GKE users. Max Smythe, a senior SWE at Google, is a maintainer of Gatekeeper and the TL of Policy Controller. He joins us to talk constraints, config and Cruise.

Do you have something cool to share? Some questions? Let us know:

Chatter of the week

News of the week

CRAIG BOX: Hi, and welcome to the Kubernetes Podcast from Google. I'm your host, Craig Box.

[MUSIC PLAYING]

CRAIG BOX: I am once again joined by our producer, Jimmy Moore. Hi, Jimmy.

JIMMY MOORE: Hey, Craig, thanks for having me back. And I will tell you, my inbox has been blowing up since our last show. People are just begging me to guest host their podcasts.

CRAIG BOX: Well, I hope I'm getting a finder's fee.

JIMMY MOORE: Absolutely.

CRAIG BOX: Here in England this week, football did not come home. We made it to the final of the Euro 2020 championship but lost on penalties, which seems to be England's cross to bear, really, as a nation that won the trophy once in the '60s and really hasn't won anything since.

But one of the more interesting things about the televised experience is hearing these stories, apocryphal stories perhaps, of electricity usage. Everyone's sitting on the edge of their seats, watching this game. And then halftime happens. And the story goes, they all get up and put the kettle on. And you can actually see-- there's a graph that I'll link in the show notes-- showing electricity usage. And it just spikes up at halftime when people are either putting the kettle on or, more likely perhaps, going and opening the fridge and getting a beer out.

But according to the statistics I see here, it was in the top 15 of such events. The World Cup semi-final in 1990 was the largest pick-up for a sporting event. And number three was a TV show called "The Thorn Birds." Have you come across that before, Jimmy?

JIMMY MOORE: I know about "The Thorn Birds," yes.

CRAIG BOX: Could you explain it for our audience?

JIMMY MOORE: Mm, it's actually kind of controversial. It's a miniseries from the '80s about these Catholic priests and the challenges of such a career. I guess that's about all I can say about it.

CRAIG BOX: I've never heard of it. But apparently, it was the second biggest miniseries ever in the US, something to that effect.

JIMMY MOORE: It was huge, next to "Roots," I believe.

CRAIG BOX: Yes. And then apparently, people would go and make a cup of tea at the end of it or otherwise engage their electrical systems.

JIMMY MOORE: Yeah, it reminds me of, actually, here we have the Super Bowl in January. And there's always this myth every year, where they talk about, oh, the great Super Bowl halftime flush breaking the sewers. And people think everyone gets up and goes to the bathroom. It doesn't happen. But it's really a great myth to imagine that sewer engineers all over the country are freaking out at halftime during the game.

CRAIG BOX: I remember the opening ceremony for the Olympics in Vancouver. There was a giant-- call it a fountain from the middle of the field. And now, I'm trying to imagine the idea of a Super Bowl halftime flush and how they could bring that as part of the halftime show.

JIMMY MOORE: Well, now you're talking about my kind of sports, Craig, absolutely.

CRAIG BOX: Well, as an event planner, you're going to be watching a lot of these events and having a very different eye on things. A lot of people are going to watch the screen growing up thinking, I could be a footballer. And I think you're watching it thinking, I could do that opening ceremony.

JIMMY MOORE: Oh, man, yeah, I mean-- and they're about to do the opening ceremonies in Tokyo. It's going to be much different. They're excluding all spectators from all of the sports. But for me, I think about the opening ceremonies. And there's always some kind of inclusion of lights or projection or some ways that these 50,000 or 60,000 people participate. This year, it's going to be a lot different. I'll be interested to see how they pull it off.

CRAIG BOX: I went to a Coldplay concert once. And we all were given a wristband. And at various times in the show, they all lit up in different colors. So I could imagine that they could just give the wristbands to the seats and not have the people in them.

JIMMY MOORE: Yeah. I went to a theme park once. And you put this hat on your head that changes colors. And you can't see the hat that you bought that changes colors, but you can see everyone else's. It's really you're just contributing to the big picture for everyone, right? So maybe that's some kind of entertainment socialism. I don't know.

CRAIG BOX: It does sound like that "Hedbanz" game that we all used to play. You put a card on your head. You don't know who you are, but you know who everybody else is. Let's get to the news.

[MUSIC PLAYING]

CRAIG BOX: For the past few releases, Kubernetes has been deprecating various beta APIs as the replacements have graduated to v1. In the upcoming 1.22 release, some key APIs are having their old versions removed. This means if you are using pre-GA APIs for things like ingress, CRDs, or webhooks, things will stop working. A blog post on kubernetes.io spells out what you have to be aware of and includes steps to upgrade old manifests to use the new APIs.
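As a rough illustration of the kind of change the post walks through (the post itself is the authoritative reference), an Ingress written against the old beta API needs its apiVersion bumped to the GA group and its backend fields restructured; the names here are placeholders:

```yaml
# Before: apiVersion extensions/v1beta1 or networking.k8s.io/v1beta1, removed in 1.22
apiVersion: networking.k8s.io/v1   # the GA Ingress API
kind: Ingress
metadata:
  name: example-ingress            # placeholder name
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix       # pathType is required in v1
            backend:
              service:             # serviceName/servicePort became service.name/service.port
                name: web
                port:
                  number: 80
```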

JIMMY MOORE: Y Combinator startup, ContainIQ, has launched a new Kubernetes-native monitoring service. It launches with three dashboard categories for Kubernetes events, pod and node level metrics, and service latency, the latter powered by eBPF. The service is priced at $250 per month for up to 50 nodes. And you can have the first month for a buck.

CRAIG BOX: Postgres vendor, Crunchy Data, builds one of the more popular Postgres operators for Kubernetes. And this week, they have announced version 5.0. Having started out as a very imperative tool, PGO 5.0 moves to become truly declarative using server-side apply and can be better integrated into a Helm, Kustomize, or GitOps workflow. It's available in both an open source and a commercially supported version.

JIMMY MOORE: Network Service Mesh has reached version 1.0. Where a service mesh usually focuses on layer 7 traffic, Network Service Mesh focuses on layer 3, and is indeed better read as a mesh of network services. It was designed for telco network function virtualization use cases, but it can be used to sit under a mesh like Istio. Version 1.0 uses WireGuard as the default transport, offers much reduced latency, and comes with a new website.

CRAIG BOX: Google Cloud has launched Certificate Authority Service to general availability. The service, first previewed last year, allows you to run your own managed, private internal CA, with integration with popular tooling like cert-manager, Terraform, and Vault.

JIMMY MOORE: Platform9 launched a managed KubeVirt product, allowing you to run VMs on their container platform. It is separate from, but runs on top of, their managed Kubernetes platform, with both services promising a 99.9% SLA.

CRAIG BOX: Security company, Rapid7, acquired both Alcide and DivvyCloud in the last 18 months and has this week launched a new service based on the merger of those two products. InsightCloudSec combines posture management, identity and access management, infrastructure as code analysis, and Kubernetes protection. And Rapid7 says it will help users shift left or move security closer to the start of the deployment pipeline.

Also in the security acquisition space, Linux and container security company, Capsule8, spelled with the number eight, was acquired by Sophos.

JIMMY MOORE: And finally, congratulations to the 28 interns who just graduated from the latest CNCF mentorship program. The LFX interns worked on 16 different CNCF projects. And many of the graduates tell their story in a blog post you can find linked in the show notes.

CRAIG BOX: And that's the news.

[MUSIC PLAYING]

CRAIG BOX: Max Smythe is a software engineer at Google, where he's a maintainer of the Gatekeeper open source project and the tech lead for the Policy Controller in Anthos Config Management. Welcome to the show, Max.

MAX SMYTHE: Thank you. It's good to be here.

CRAIG BOX: You have a double degree in theater and physics. Is that a common combination?

MAX SMYTHE: Not where I went to college, at least, but I did like that there were common interests across the different professors. I had a physics professor who liked to play violin in the orchestra. And a lot of the theater professors were interested in the fancier parts of physics, like some stuff about astrophysics, how orbits work, things like that.

CRAIG BOX: Brian May of Queen is famously a rock star and an astrophysicist.

MAX SMYTHE: Is he really?

CRAIG BOX: We could do an entire podcast talking about Brian May, but let's focus back on you here, Max. It would seem the obvious thing to make the most of both of those skills would be to work in something like visual effects.

MAX SMYTHE: I did work in visual effects out of college. I graduated mid-recession, so out of college is a bit of a misnomer. About a year after graduating from college, I worked at Sony for a while. I was what they call a production services technician, which basically meant I helped wrangle the render farm, handled data ingest, that sort of thing for various projects, and did that for 3 and 1/2 years or so.

CRAIG BOX: You worked on the movie, "Edge of Tomorrow." Why is there any question as to whether or not that was a great movie?

MAX SMYTHE: Bit of a secret, I never watched it.

CRAIG BOX: [SIGH] Come on, that's a great movie!

MAX SMYTHE: [LAUGHING] I hear that. I hear that. It's one of those movies that, apparently, would have been right up my alley. It was also the last movie I worked on before I left.

CRAIG BOX: Good thing about movies is, generally, you can find them somewhere and go back and watch them later on, if you didn't catch them the first time around.

MAX SMYTHE: This is true. And I have had many years to do that. And somehow, I've not.

CRAIG BOX: We're building up a list of things we need to school you up on here. But why would you give up the glitz and glamour of Hollywood for more generic platform programming work?

MAX SMYTHE: So glitz and glamour may be more of a veneer.

CRAIG BOX: Maybe it doesn't apply to the wrangling of the data?

MAX SMYTHE: Right. Right. No, it was a lot of fun. Career-wise, I would say it was a lot of stress, particularly around that time. It was kind of a per-project basis kind of thing. So at the end of every movie, you weren't sure what would happen next. My then-girlfriend, now my wife, she wanted to go off to school, so that was a good opportunity for a little bit of a shift. She had friends in the Bay Area. And so that gave me room to start working at a startup, which is where that transition happened.

CRAIG BOX: And that led to your career at Google. I understand you were an SRE here for a while?

MAX SMYTHE: That is right. I joined Google mid-2016 and sometime, I want to say like late 2018, shifted over. I was an SRE in security services, which is a little bit of an odd branch in some ways. There's definitely high-QPS security services, which is something that SRE is very well known for, scaling and reliability at scale. But there are also lower-bandwidth security services, where you're starting to think a little bit more about the cost of being down and what that could do, versus the actual we-need-to-serve-10-million-QPS problem. And that was my focus, which got me a lot of experience with-- if you read the SRE handbook, they talk about Chubby, which is this distributed database that's based off of Paxos and is used for reliable storage.

CRAIG BOX: It's effectively the etcd to Borg.

MAX SMYTHE: Yeah, I think that's fair to say. I don't know, honestly, much about how etcd arranges itself. I know it's distributed and has nodes, effectively, yeah, that sort of distributed system. And also back in my startup days, I took a little bit of a deep dive into a database called Riak, which is loosely based on Amazon's Dynamo paper, if I remember correctly.

There were a lot of cool programming concepts there, particularly around eventual consistency, like what happens if there's a network partition, and half of your database cluster loses contact with the other half. How do you resolve that divergent state? That turns out to be super helpful for Kubernetes resources, particularly with the webhook-and-controller model that Gatekeeper uses.

CRAIG BOX: What lessons did you bring from security SRE into your current work in policy?

MAX SMYTHE: One, I would say a little bit of, let's call it, caution. I think it's very true that when you're dealing with security, by definition, you're dealing with the fringe probabilities.

CRAIG BOX: Right.

MAX SMYTHE: You're not necessarily thinking about, 90% of the time, this will work well. You're thinking about these probabilities. Someone may be trying to affect these probabilities. And to a certain extent, you can't overcome that. Nothing's unhackable. But you definitely want to, at least, give a good faith effort to figure out, OK, is what I've written pre-broken, and also, I would say, how to manage some of these security processes, because a lot of what makes a protocol either secure or manageable or both is how it composes.

Misconfiguration, my understanding is, is a large part of most security incidents, or at least a common cause of many of them. The easier it is to understand what's going on and what the impact of your changes might be, and the less possible it is to describe something that would have a broader scope than you might have intended, the less likely you are to have negative outcomes.

CRAIG BOX: You are a maintainer of the Gatekeeper open-source project. I understand that to be a policy engine for Kubernetes. But how do you describe Gatekeeper?

MAX SMYTHE: Gatekeeper is a way of expressing policy in a granular, KRM-centric form: basically, constraints that limit what users are allowed to do in Kubernetes.

CRAIG BOX: When you say a KRM form, that means you're defining the policies as Kubernetes objects, effectively as custom resources?

MAX SMYTHE: Correct. So you could work with policy the same way you would work with any other Kubernetes resource.
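For readers who haven't seen one, a constraint is just another Kubernetes object. This sketch is adapted from the Gatekeeper library's required-labels sample and requires every namespace to carry a team label; the exact kinds and fields depend on the constraint templates installed in your cluster:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels        # this kind is created by a constraint template
metadata:
  name: ns-must-have-team
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]   # only evaluate namespaces
  parameters:
    labels: ["team"]           # the label every namespace must carry
```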

CRAIG BOX: Now, this week, the Anthos Config Management product, which was previously only available to Google customers as part of the Anthos package, is now available to all GKE customers. There are two different halves to it. There's a config half and a policy half. And I know that you used to work on the config half. So could you tell us, first of all, a little bit about Config Sync?

MAX SMYTHE: Config Sync, that was the original project I worked on, actually, after SRE, so passing familiarity. But that, basically, is a way of shifting your whole pipeline left in terms of configuring Kubernetes, so the idea being you can use Git as a source of truth or other code repositories. And you can configure many different clusters to observe the state of these repositories and conform themselves to that intended state.

Ultimately, it allows you to orchestrate your entire set of clusters as a single entity, rather than dealing with each cluster separately. It also allows you to deal with your policy statically in these Git repos, which opens up a lot of different possibilities: now you can do pre-commit checks to make sure that certain invariants aren't violated, or analyze just how different a change would be before committing it to production.
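As a sketch of what that looks like in practice, Config Sync is pointed at a repository with a RootSync object, roughly as below; the repository URL and directory are placeholders, and field names may differ between Config Sync versions:

```yaml
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://github.com/example-org/cluster-config   # placeholder repo
    branch: main
    dir: clusters/prod    # directory in the repo to sync from
    auth: none            # public repo; private repos use a secretRef instead
```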

CRAIG BOX: Is it fair to call it a GitOps tool?

MAX SMYTHE: I would say, yeah, 100%, actually definitionally GitOps, because it's keeping people off of kubectl. If you're using kubectl, there's less of a paper trail. Kubernetes does have an audit log that you can engage, but it's certainly not going to be to the granularity that you would get with a Git repository.

CRAIG BOX: So we have tooling which allows us to define not only what we want for a single cluster, but some inheritance about how we want to define things in multiple clusters. Then we also, now, have the Policy Controller, which you, of course, are the tech lead for. That sounds a little bit like the Gatekeeper open-source project. How do those two projects relate?

MAX SMYTHE: They're very similar. Policy Controller is built off of Gatekeeper. They share most of their code base. There's a slight difference with regard to something we call referential data, where Policy Controller is a little more concerned about the impact of eventual consistency on policy enforcement and making that a bit more clear to the user. But the main difference is what gets bundled with it: there is a constraint template library that gets shipped as part of Policy Controller. Now, open source has its own constraint template library, and those constraints are in there. But there are Google-authored constraints in the Policy Controller distribution as well.

Another key difference is we handle the installation of Policy Controller/Gatekeeper for you. So there's a more hands-on approach on our side: if there's an upgrade, it should just work, versus needing to follow whatever upgrade procedure might be required in open source.

CRAIG BOX: We spoke to the team from Styra, who builds the Open Policy Agent, OPA. And Gatekeeper, as they describe it, is a way of applying those rules to objects in Kubernetes by way of running an admission controller. What are the actual things that someone might be controlling using Gatekeeper?

MAX SMYTHE: A common use case here that comes up a fair amount is PodSecurityPolicy.

CRAIG BOX: But that's a Kubernetes-native thing. You don't need a Gatekeeper for that.

MAX SMYTHE: Correct. However, it is not going to go GA.

CRAIG BOX: Right.

MAX SMYTHE: There is a PodSecurityPolicy v2 proposal out there. I'm not 100% sure on what the current status of that proposal is.

CRAIG BOX: I believe the proposal basically says, "use Gatekeeper for this use case".

MAX SMYTHE: OK. And there's also a more agnostic thing that I think SIG Auth is proposing. I'm forgetting the name they went with, but, essentially, security profiles, where you have a low-security profile that lets whatever you want happen, a medium-security profile as the default, and then a higher-security profile that's more restricted but supposedly safer.

CRAIG BOX: But if we generalize this and say there was something in Kubernetes that was defined as a way to define a set of policy, and then it turned out that that wasn't descriptive enough, and the answer is, basically, use something with a domain-specific language, where you can do Turing-complete work, I assume, and define anything you like, is that likely to be true of everything you might want to set policy on? Does it make sense not to worry too much about getting the minimal case right and go straight to the programming?

MAX SMYTHE: The solution is never going to be 100% either way. And that was one of the issues that PodSecurityPolicy had. It was not scalable to try to plug everyone's use case into built-in, released Kubernetes code. There's also that lag time of, well, it's baked into the next Kubernetes version, but when is that going to become available to the user?

CRAIG BOX: Right.

MAX SMYTHE: On the other hand, there was this simplicity in PodSecurityPolicy. It came with Kubernetes. It had documented configuration knobs with well-understood impacts that everyone agreed on. And it was this locus of support, where, if someone had a problem with a particular policy, the community could figure out the best way forward, fix the bug, change the expression, what have you. That's the benefit of prepackaged policy: that locus of support and those documented configuration knobs. The part where it isn't 100% is, as I said, no company can fit 100% into these pre-built boxes. So they need some room to adapt for their specific use cases.

What Gatekeeper tries to do, and its stance on that question, is: take the prebaked solution where you can, take that simplicity where you can. If you can start with that, go for it. But if you hit a limit where you're no longer able to work with the pre-built solution, there should be a path forward. And that path forward, hopefully, does not involve you needing to write completely new infrastructure, a new webhook, new monitoring, new ways of reporting violations. To the extent that you've shifted left, how are you going to keep your enforcement consistent, both at the edge with kubectl apply and in your shift-left pipeline, all that stuff?
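For the PodSecurityPolicy-style use case discussed above, taking the prebaked solution mostly means instantiating a template from the shared library rather than writing policy code yourself. As an illustrative sketch (adapted from the gatekeeper-library; template names can change between releases), blocking privileged containers looks like this:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer          # template shipped in the gatekeeper-library
metadata:
  name: psp-privileged-container
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    excludedNamespaces: ["kube-system"]  # carve-out for system workloads
```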

CRAIG BOX: Gatekeeper comes with CRDs for defining policies, in terms of the constraint templates that you've mentioned before, and then a different resource, called a constraint, for instantiating those policies. Why are those two things different objects?

MAX SMYTHE: The main reason to keep these objects different is to allow for this prebaked experience that, otherwise, would not be possible. If you combine code and policy into the same object, then it becomes less likely that that policy is portable across companies. You have solutions where you can store special global variables and substitute the values. You have other solutions where people just rewrite parts of the code in places. In some ways, it's very similar to parameters, which constraints use to configure their response.

But the key difference, I would say, would be shareability. If I have three constraint templates that are homegrown at my company for very specific use cases, and then I'm taking 10 constraint templates from some open-source library, I'm taking another five from some other team in my own company, and so on and so forth, those are designed to be non-interactive with each other, so you should just be able to combine them. That wouldn't be true if you had just straight code. It's possible that you'd have name collisions. It's possible that those name collisions could have bad effects, where one policy negates another policy, because some function's behavior is now totally different than it, otherwise, would have been.

And you don't have that well-defined interface, which is the parameters, that sets the expectation of, I expect this value to be a string. And maybe it has some string validation. So that strict type checking, one, provides a little bit of safety. It provides some immediate feedback for the user of "I have misconfigured this field", just a little bit. And that's getting better with CRD v1 and structural schemas. Before, unknown fields would be allowed. That's less likely to be true in the future; not quite there yet.

The other benefit of the schema is that it documents what control knobs are available to you, without needing to go through the code to understand it, whether it's Rego, something written in Go, or whatever other policy language. There are a lot of them out there. You don't need to be cognizant of the details in order to know what you could do with the policy.
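To make the template/constraint split concrete, here is roughly the template that the earlier required-labels constraint instantiates, again adapted from the Gatekeeper documentation. The openAPIV3Schema block is the typed control knob Max describes, and the Rego is the shared code that individual constraints never need to touch:

```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels        # the kind constraints will be written against
      validation:
        openAPIV3Schema:               # typed parameters: the documented control knobs
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
```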

CRAIG BOX: Having all of these policies in the library obviously makes it easy to compose them together when you have different parts of a system that you want to apply restrictions to. How do you think about an organization managing them at scale when they have different teams who might be responsible for different parts of the policy?

MAX SMYTHE: Scale is a huge challenge. And in some ways, that was one of the defining use cases for, I don't want to say, Gatekeeper, but the design of constraints and templates, just generally. If you're a large organization, the expectation that everyone is on the same page at all times is probably not a reasonable expectation.

CRAIG BOX: They're not even in the same book.

MAX SMYTHE: [LAUGHS] Exactly. Exactly. If you think about how policy is described, one common model of describing policy is like a flowchart, where, if A, then go down this branch; if B, go down this other branch. And eventually, you hit terminal nodes, where you reach a concrete decision, allow or deny.

What, to me, is interesting about that model of policy, which, by the way, is very valid-- and you can express some very complex things with it-- but it also means that you could change one node in that flowchart, and the entire policy will behave very differently, just based off of what else is there. For example, if you're allowed to express a policy that just says, at this point, allow everything, then all the rest of that alternate branch is being ignored. Unless you know what has been ignored, you don't know the impact of adding that single policy. So the scope of impact for that change greatly exceeds what you might expect it to be. This is one small change.

CRAIG BOX: The proverbial butterfly flapping its wings.

MAX SMYTHE: Butterflies are pretty. But they're also bugs. We don't want them in our code-- or chaos monkeys. We want them in our infrastructure, but our code should keep them out. That was a terrible joke, by the way, I apologize.

CRAIG BOX: I'll allow it.

MAX SMYTHE: Thank you, Your Honor. The thing with constraints is that they are attenuative: essentially, whenever you add a new constraint to a body of constraints, the only possibilities are either that the constraint is duplicative, and it does nothing, or it further restricts the allowed states that your system is able to have. You now know, if I am bringing in a bunch of constraints from department B and mashing them together with department C's, that neither B's nor C's policies are at risk of this short-circuiting effect. This is helpful if you have a Git pipeline, where you may literally be combining these policies. It also means I don't need to look at changes I made a year ago when adding a new constraint; the only thing I need to worry about is, am I blocking an existing state that should otherwise be valid?

CRAIG BOX: Now you mentioned your Git pipeline there. There are a few different places that you can enforce or validate policy. You can do it when code is committed. You can do it when the YAML is applied to your cluster, through the Config Sync tool or through the webhook. And then you can also audit afterwards. Which of those is the right time and place to apply or think about policy?

MAX SMYTHE: All of them, I would say. It sounds a little extreme, but each one of those different enforcement points has its own strengths and weaknesses. If you have shift left, and you're enforcing at this time of stasis, at Git apply time, there are no eventual consistency concerns. You don't have to worry about, oh, what if only 50% of my policies were loaded, if you rely on what we call referential data. Essentially, if I want to verify that the labels on some object are unique, or something like that, I don't have to worry about, oh, maybe only 30% of those objects are loaded, so the uniqueness check isn't right.

All of that stuff is not a concern, which means you're more free to operate more complex policies there. Also, it's less time sensitive, so you can have more computationally complex policies. And it's not affecting production at that time, so uptime is less of a concern. In fact, it might not even be a hosted service. It could just be scripted code, so lots of benefits. But if you provide users access to kubectl in your cluster--

CRAIG BOX: Then you're a bad sysadmin. And you should go sit in the corner.

MAX SMYTHE: Right. Or you've had an outage--

CRAIG BOX: True.

MAX SMYTHE: --and someone needs to go sudo and--

CRAIG BOX: There's always exceptions.

MAX SMYTHE: Yeah. Best practices are good, but they're not always realistic. Super users are a pretty accepted thing, grudgingly but accepted.

CRAIG BOX: Spoken like a true ex-SRE.

MAX SMYTHE: Right? There's that concern, shall we say, for the details. But if you provide kubectl access to a cluster, then none of that policy is enforced. And particularly, if it's an emergency situation, no one's looking through policy books to say, wait, hold on, is this the appropriate way to do something? They're concerned with the uptime or whatever, data loss, lots of different things. So having that admission webhook as this automated guide of "maybe you don't want to do that, because of x, y, and z bad consequence" helps guide the responders, without putting the burden on them of always having the policy space in their head, so they can just focus on the problem.

It also doesn't address policy drift. If, for instance, there is old state in your cluster that somehow didn't get cleaned up, shift-left checks are not going to be aware of that state. And they're not going to warn you about it. kubectl and admission webhooks, they're only concerned with that ephemeral point in time when that kubectl request is made, so they're also not going to be concerned with that state. They're also not going to be doing a better job than shift left in terms of contextual awareness, because Kubernetes is eventually consistent. So there might be stuff they're not aware of.

So really, at that point, even though audit has a downside in that the violation is already present on the cluster-- and that's the least ideal time to catch something is after it's already happened-- it allows people to catch things, like this pre-existing pod that has been around since Kubernetes has been around is now in violation of this new policy I have added or want to add. Whether that's a problem for you in terms of now we need an exception, or that pod needs to go away, that's a separate discussion. But at least you're aware of this situation. And you can have that discussion.
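One practical pattern here: Gatekeeper constraints take an enforcementAction field, so a new policy can be rolled out in audit-only mode first, with the audit controller reporting pre-existing violations on the constraint's status instead of blocking anything at admission. A sketch, with the status shown as a comment since it is written by the controller:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: pods-must-have-owner
spec:
  enforcementAction: dryrun    # report violations via audit, but don't deny requests
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    labels: ["owner"]
# status:                      # filled in by the audit controller
#   totalViolations: 1
#   violations:
#     - kind: Pod
#       namespace: default
#       name: legacy-worker
#       message: 'you must provide labels: {"owner"}'
```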

CRAIG BOX: If I'm specifying these policies as Kubernetes resources that I create in the cluster, what is actually watching them and updating the webhook to make sure that they apply as I create new resources?

MAX SMYTHE: That was a pretty unique challenge, I think, in building Gatekeeper, and where that distributed database knowledge was, I believe, helpful. Gatekeeper itself is a weird mix of a webhook and a controller. So there's a built-in controller in the process that has a watch registered with the Kubernetes API server that is saying, OK, there's a new constraint template. Let me ingest that code. Let me make sure that that's recognized. If it's missing, let's add a CRD to Kubernetes so the user can now register constraints of that kind. Oh, and because now there's this new kind in Kubernetes, now I need a new watch.

And figuring out that dance of how to get all of that working is a pretty complex issue. We can gloss over that for now and just assume it works. The Gatekeeper process, the controller works fine. And it observes a new constraint. And it gets enforced.

There's still a problem here. And it's a twofold problem. One is that webhooks, in terms of how they scale both for capacity and reliability, scale horizontally. Normally, if I want to, say, double my serving capacity or halve my likelihood of having an outage-- speaking roughly, there's failure domains and all that-- I would double the number of serving webhooks. And I'll have twice the compute capacity then and roughly half the probability of failure. I'm sure some statistician is saying, not really. Fair, completely fair.

CRAIG BOX: I'll allow it.

MAX SMYTHE: Yeah, rule of thumb. On the other hand, controllers are singleton processes. You only really want to have one controller running at a time, because it's a bit simpler that way. The probability that two controllers will be fighting each other is less.

CRAIG BOX: And they're not on the request path?

MAX SMYTHE: Correct. They're not on the request path, so who cares if a controller goes down? There are limits. You probably don't want a controller down for hours. The SLO is lower, to use SRE speak. You can afford to, for instance, when you're rolling out, have those few seconds of downtime it takes to take the controller down and have a new controller stand up.

Unfortunately, as I said, Gatekeeper, the process-- and it needs to be in process, because this controller is managing the in-memory state that the webhook is accessing to serve requests-- needs to be both a controller and a webhook at the same time. There's only so much time to talk about this. I do have a KubeCon talk, at, I believe, KubeCon US 2020, that goes into a bit more detail. The long and short of it is, Rita and I gave the talk, I want to say. Sorry, Rita. She's a co-maintainer of Gatekeeper, from Microsoft.

The thing with it is that, if you have careful design of the resources and how you're using them, you can mitigate those concerns. And that's a delicate dance. From an operational perspective, though, eventual consistency comes back to bite you. Policy is meant to be enforced. It's not there to make everyone feel good, although it does do that. It's meant to ensure the state of the system is where you want it to be.

And how do you interpret in Kubernetes if you have a resource that is meant to enforce policy, and you have five webhook pods, and it gets added? Well, there's going to be some sequence of events where, first, one pod notices and ingests that policy, then two pods, then three pods. And depending on how you load balance, there's going to be some percentage, likelihood, of enforcement. We can say, if you have five pods, one of which hasn't yet ingested that constraint, you have roughly an 80% chance of that constraint being successfully enforced by the webhook.

So there's also some careful design in terms of high availability and the status reporting on the resource to be able to make conclusions about is this thing fully ingested, not just by one pod but by all pods, such that I can expect, barring bugs, that this policy will be enforced. And also, if I roll out an update to Policy Controller, is there a way I could do that such that, when I bring up this new pod, it doesn't have any state stored in it yet, that that pod doesn't cause spontaneous under-enforcement, which would also be an issue.
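That per-pod bookkeeping is visible on the constraint itself: each Gatekeeper pod reports into a byPod section of the status, so you can tell whether every replica has ingested a policy before relying on it. An abbreviated sketch of what that status can look like; exact fields and pod names vary by version and installation:

```yaml
# kubectl get k8srequiredlabels ns-must-have-team -o yaml  (status only, abbreviated)
status:
  byPod:
    - id: gatekeeper-controller-manager-0   # one entry per webhook/audit pod
      enforced: true
      observedGeneration: 1
    - id: gatekeeper-controller-manager-1
      enforced: true
      observedGeneration: 1
    - id: gatekeeper-audit-0
      enforced: true
      observedGeneration: 1
  totalViolations: 0
```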

CRAIG BOX: All these things that you've described so far sound like we have a complete and comprehensive policy system in Gatekeeper. What's next for that project to do?

MAX SMYTHE: A lot. One thing that has fallen by the wayside is the authorship experience, the consumption experience, and, really, the documentation for all of this: how do I write constraints, how do I use other people's constraint templates? That sharing story is really core to avoiding administrator complexity. And it's underdeveloped. So that's an important area of focus for us.

Another place that we're looking at adding value is in mutation. There are two kinds of admission webhooks in Kubernetes, validating webhooks and mutating webhooks. Mutation's a really complex story, so we've been putting it off. We've started to think about how we want to address it. And we do have an alpha out.
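The mutation alpha uses its own resources. As a rough sketch of the shape (the API was alpha at the time of recording, so names and fields may well have changed since), an AssignMetadata mutator that stamps an owner label onto objects in one namespace looks something like this:

```yaml
apiVersion: mutations.gatekeeper.sh/v1alpha1
kind: AssignMetadata
metadata:
  name: label-owner
spec:
  match:
    scope: Namespaced
    namespaces: ["payments"]          # only mutate objects in this namespace
  location: "metadata.labels.owner"   # the field to set
  parameters:
    assign:
      value: "payments-team"
```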

And another area of focus for the project is what we're calling external data. So this would be things like integrating with, say, your company's LDAP server or a vulnerability database for Docker containers, for instance, and being able to make policies based off of that data.

CRAIG BOX: That's basically a flowchart that says, if it's got bugs, don't use it. Deny.

MAX SMYTHE: And you know how I love flowchart policies. They super-scale. It's amazing. The problem there is the scalability of the data. There are a lot of images. And whenever you're talking about interacting with another database, my operating assumption is that that entire data set probably won't live entirely in RAM on your webhook server. So the question is, how can you provide an extensible way to serve that data with reasonable latency for a webhook, and without spending an unreasonable, maybe impossible, amount on memory capacity?

CRAIG BOX: All right. Well, thank you very much for joining us today, Max.

MAX SMYTHE: Thank you for having me. This was a lot of fun.

CRAIG BOX: You can find Max on Twitter @MaxSmythe, with a y, and you can find links to Anthos Config Management and Gatekeeper in the show notes.

[MUSIC PLAYING]

CRAIG BOX: Well, that about wraps it up for this week, Jimmy. Time to go put the kettle on!

JIMMY MOORE: We'll watch those power spikes, Craig. If you enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on Twitter @KubernetesPod or reach us by email, kubernetespodcast@google.com.

CRAIG BOX: You can also check out the website at kubernetespodcast.com, where you will find transcripts and show notes as well as links to subscribe.

JIMMY MOORE: We'll see you next week. So until then, thanks for listening.

[MUSIC PLAYING]