#208 September 21, 2023

History of containerd, with Phil Estes

Hosts: Abdel Sghiouar, Kaslin Fields

This week we explore the history of containers, particularly containerd, with Phil Estes.

Do you have something cool to share? Some questions? Let us know:

News of the week

Links from the post-interview chat

ABDEL SGHIOUAR: Hi, and welcome to the "Kubernetes" podcast from Google. I'm your host, Abdel Sghiouar.

KASLIN FIELDS: And I'm Kaslin Fields.


ABDEL SGHIOUAR: This week, we chatted with Phil Estes. Phil is a principal engineer at AWS and one of the core maintainers of containerd. He is also a member of the technical oversight board of the Open Container Initiative, OCI. We talk about the history of containers and Docker, the Dockershim and Kubernetes, the OCI project, and the story behind containerd.

KASLIN FIELDS: But first, let's get to the news. The Notary Project announced a new major release of the Notary spec and the notation CLI and Go libraries. Notary spec version 1.0.0, and notation, notation-go, and notation-core-go version 1.0.0 are now available for use. Notary is a project which consists of tools and specs to help secure software supply chains, including signing, verifying, and managing keys and certificates.

ABDEL SGHIOUAR: Kubernetes legacy package repositories will be frozen on September 13 of this year. The community announced back in August the migration from Google-hosted package repositories to community-hosted ones. You are likely to be impacted if you deploy Kubernetes using any installer that relies on the legacy repositories, or if you use kubectl on Linux. Check the link in the show notes for more information.

KASLIN FIELDS: Gateway API version 0.8.0 introduced experimental support for service mesh. The Kubernetes Gateway API is the next-generation ingress API, which introduces modularity and native objects for traffic management. Today, Kuma 2.3, Linkerd 2.14, and Istio 1.16 are all fully conformant implementations of the Gateway API for service mesh.

ABDEL SGHIOUAR: Amazon announced VPC CNI support for Kubernetes Network Policy. With this new feature, you can use the Amazon VPC CNI to implement pod and node network policies to secure traffic inside EKS.

KASLIN FIELDS: VMware announced a developer portal in Tanzu, based on Backstage. Backstage is a CNCF incubating project that started at Spotify.

ABDEL SGHIOUAR: Google Cloud Next happened in San Francisco August 29 through 31. The event prominently featured Google Cloud customer stories and new feature launches, but the star of the show was AI. Google announced their new Duet tool for assisted work in a variety of products, including Google Workspace and Google Cloud. You can learn more in their blog posts and videos, which we will link in the show notes.

KASLIN FIELDS: The KubeCon and CloudNativeCon North America schedule is live. The event takes place on November 6 to 9 in Chicago, Illinois.

ABDEL SGHIOUAR: Danish startup Rig.dev raised 2 million euros in one of the largest pre-seed rounds in the country's history. The company develops an open source solution on top of Kubernetes, which helps developers build scalable backends and Cloud infrastructure.

KASLIN FIELDS: And that's the news.

ABDEL SGHIOUAR: Hello, everyone. My name is Abdel, and welcome to the "Kubernetes" podcast by Google. Today we are talking to Phil. Hi, Phil.

PHIL ESTES: Hey there. Good to be here.

ABDEL SGHIOUAR: Phil is a principal engineer at AWS, working on core container technologies. You are one of the maintainers of containerd, which is our topic for today. You are also a member of the technical oversight board of the Open Container Initiative, OCI, which we are also going to dive into. And you are a CNCF Ambassador, and overall, you are an awesome person.

PHIL ESTES: Awesome.

ABDEL SGHIOUAR: The first time we met was in Morocco, actually, at DevOps Morocco a long time ago. And we've basically been friends ever since. We just keep coming across each other quite often.

PHIL ESTES: Yeah. And I'll have to say, I'm really bummed I haven't been to Morocco since pre-pandemic now. And it's way too long. But for those of you out there that like the DevOps events, there's a great one in Morocco. And I'm hoping to go back someday soon.

ABDEL SGHIOUAR: Yeah. It's the only DevOps outside of Europe for now.


ABDEL SGHIOUAR: So today we're just going to chat more about containerd and about you. So let's get started by actually talking about who you are. Who is Phil?

PHIL ESTES: Yeah. Who is Phil? The funny thing is I've been in tech forever, as you can tell by my hair color. It's like I'm not a young kid out of college anymore. So I worked a super long time behind the scenes internally. I worked for IBM for 26 years. And I wasn't really known, had no community involvement anywhere, worked on some cool stuff, worked on Java and Linux.

But it's coming up on 10 years, which is crazy to think that it's been that long. While I was still at IBM, I got more involved in Linux and Linux distros, working with Red Hat, Canonical, and SUSE, our partners at IBM. Someone said, hey, this new technology is coming out. IBM had a fledgling cloud platform, mostly focused around PaaS, Cloud Foundry at the time. They're like, somebody needs to go look at this Docker stuff, because it's getting really popular. Phil, you know Linux. Why don't you go check it out?

And that was really a huge turning point in my career because I went from being totally unknown external to IBM to all of a sudden having connections everywhere, working on Docker, which, again, timing is everything. I got involved just at the moment that like it's exploding, and got really involved in open source at the time because I just loved working with the community that was involved. There were a great set of folks at Docker at the time. Superstars, really. Google was involved. Microsoft got involved. Red Hat, a bunch of other folks.

So it was an awesome time. And so the last almost 10 years-- that was like late 2013-- I've been fully involved in open source in a public way, which we're going to get into the OCI. And that came along soon after I got involved. And then during the pandemic, I had all these connections. And people were like, hey, now's the time-- I've been holed up in my little house here in Virginia forever, working from home. And I didn't want to work for the big guys because they're like, you have to be in an office.


PHIL ESTES: Anyway, so during the pandemic, long story short, I joined AWS, given their huge use of containerd and their container services. And so I've been here 2 and 1/2 years now.

ABDEL SGHIOUAR: Awesome. Yeah. I mean, you had 26 years at IBM. It's crazy to think about. I think you've spent as much time in the cloud space as my whole career. I'm coming up on 10 years at Google two days from now. And overall, I've been working for like 12 years or something. So obviously, you have a lot of background.

So before we dive into containerd as a topic, since I started doing public speaking, there's this thing I hear people give as an analogy. They say Docker is great, but it's not really revolutionary, in the sense that it only introduced an abstraction layer on top of stuff that already existed, right?


ABDEL SGHIOUAR: So Docker just repackaged some existing Linux things to make containers easier, because people had been doing containers before even Docker existed.


ABDEL SGHIOUAR: So have you been working on this like pre-Docker technologies, like the namespaces?

PHIL ESTES: Yeah. It's interesting. I don't think I heard people say the word containers till I heard about Docker. However, we had this build system at IBM for custom distributions, repackaging Linux components for embedded devices, IBM mainframes, and high-end Power servers. And we were building custom distros and running hundreds of builds a day.

To be honest, it wasn't my idea. There was a smarter kid who joined out of college. And he's like, have you guys ever heard of chroot, this Unix API that basically gives you a private file system namespace, so to speak? And so all our builds happened in chrooted environments. So we were able to actually run isolated builds, in a sense. Again, they weren't containers. We weren't using the PID namespace or any of the other namespaces.

So I had this tiny little kind of awareness of these namespace concepts. And after Docker came to be, I met people in IBM, kernel hackers, who were using these namespaces for their own purposes pre-Docker. I personally hadn't experienced it myself, but there were people at IBM working on some of the initial namespace ideas, even 5 or 10 years before that, like cgroups and all those pieces.

ABDEL SGHIOUAR: Yeah, cgroups, namespaces, and chroot. I think anybody who started their career around Linux has at least played around with chroot. That would be one of the basics. cgroups and namespaces were probably more advanced territory, for people doing kernel development or with very specific use cases.

But I think what's interesting is that Docker managed to hide away the complexity of all that stuff behind one neat API and a single, very simple to use command line, right?

PHIL ESTES: Yeah. When I first started giving talks on Docker, I would say there were two things that I think made Docker so popular, compared to the small tools people had assembled to do similar things before. The first was what you just said, that abstraction: I just type docker run, and something happens. And I don't have to know about it.

ABDEL SGHIOUAR: Yeah, I don't have to worry too much about it.

PHIL ESTES: The second thing was having an image repository, I think, was huge.


PHIL ESTES: All of a sudden it's like docker run alpine, or docker run redis, and it just starts. And you didn't have to think about installing packages, or dependencies, or all those pieces.

ABDEL SGHIOUAR: Yeah, yeah, yeah. It's kind of like an apt repository in Linux, essentially, but for Docker images, where you can just piggyback on existing work. You don't have to figure out things from scratch yourself, right? I think there's also the reusability of Docker, the fact that you can take an existing image and just build on top of it.


ABDEL SGHIOUAR: So that's probably the third thing that made Docker. I think it's very interesting to see all the people using Docker today in its current form without really understanding where it came from and without having to worry, which is a fair point, until things go wrong.


ABDEL SGHIOUAR: Or until you have to dive into, how can you isolate containers from each other? Which we're going to talk about a little bit. But there is this popular quote that I have seen floating around. I don't know where it came from. But somebody said, containers don't contain, from a security perspective, right?


ABDEL SGHIOUAR: And so I've seen people struggling with understanding that, oh yeah, containers are just processes, right? So let's go back a little bit. containerd, Docker, what's the story behind them? Take us back. And imagine I am your grandchild by the fireplace.
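The "containers are just processes" point can be seen directly from the host. A minimal Python sketch (assuming a Linux machine with /proc mounted): every containerized process shows up in the host's process table as an ordinary PID.

```python
import os

def host_visible_pids():
    """Return the PIDs the kernel exposes in /proc (Linux only).

    Processes started inside containers appear in this same list,
    because a container is just a process in namespaces, not a VM.
    """
    return sorted(int(entry) for entry in os.listdir("/proc") if entry.isdigit())

if __name__ == "__main__":
    pids = host_visible_pids()
    print(f"{len(pids)} processes visible; lowest PID: {pids[0]}")
```

Comparing `ps aux` on the host with `ps` inside a container shows the asymmetry: the host sees the container's processes, but the container, in its own PID namespace, only sees its own.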

PHIL ESTES: Yeah, it's always funny, because there's so much interesting history and much of it, as all the interesting stories in life, is not really technical. It's the fact that people were involved. And people got frustrated with other people. And relationships were interesting for a time. And so there's a lot of that behind it. And there's actually some great longer pieces that have been written about the history of that era.

And obviously, we don't have time to dig into all the details, but I think the key piece of information is that it's always interesting to tell people that Kubernetes and Docker started life almost the exact same time. Even if you hadn't heard about Kubernetes maybe till a little later, they were both being worked on. Google was developing Kubernetes 2013, 2014. Docker, same thing. And Kubernetes was consuming Docker as its runtime. That's the other thing that blows some people's minds. It's like, oh, I thought it was Kubernetes versus Docker. No, it's like Kubernetes was using Docker from day one, and using it as the executor of your pod containers.

In 2015, 2016, there became a lot of tension between those communities. And there's a lot that goes into Docker's personalities. Solomon Hykes has admitted that he rubbed a lot of people the wrong way with sort of strong opinions about Docker is not going to support Kubernetes directly. If you like it and want to use it, that's fine. But we're going to focus on Docker Swarm.

All those things just kept creating higher and higher levels of tension that many of us, including IBM-- I know they had a meeting with Google at the time. A lot of people were privately coming to Docker and saying, this tension has to be resolved in some way. There were technical pieces to those problems. One was that every time Kubernetes cut a release, and tried to upgrade Docker, something would break, because Docker had added features or changed the API in some way.


PHIL ESTES: And they didn't have official stability guarantees. And so one of the key things, if you look at the announcement we made in 2016 when we said containerd would be its own full runtime, one of the initial things we said is we're going to have a gRPC API stability guarantee. You're not going to have breaking changes in point releases. And so that was an answer to, again, one of these technical challenges of Kubernetes using Docker.

So anyway, again, taking all those bits and pieces, like tension in the community, technical issues using Docker, Docker having its own orchestrator engine in its binary used by Kubernetes, like, all of these things came to a head. And thankfully, people at Docker had already built this little piece called containerd.

And it's kind of funny that there's essentially two things you could call containerd. There's the first thing, which was just a process manager. It just orchestrated runC, essentially. It just told runC when to start a container and what the OCI config was. Then in 2016, we convinced Docker that if you donate that piece to a foundation, or put it in vendor neutral territory, open source, let's expand it to have image capabilities, push and pull to a registry, you have a snapshotter, the union file system handler for the runtime.
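The "OCI config" mentioned here is the config.json file that runC consumes. A heavily trimmed sketch (values are illustrative only, not a complete spec-valid file):

```json
{
  "ociVersion": "1.0.2",
  "process": {
    "args": ["sh"],
    "cwd": "/"
  },
  "root": { "path": "rootfs" },
  "linux": {
    "namespaces": [
      { "type": "pid" },
      { "type": "mount" },
      { "type": "network" }
    ]
  }
}
```

In that early incarnation, containerd's job was essentially to produce a file like this and tell runC to create and start a container from it.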

Docker had its own graph drivers. And so containerd was going to have this same technology called snapshotter. So it was late 2016 that we finally made that happen. So Docker basically spun out containerd. And at that point, runC, that same summer, was already part of the OCI. So all of a sudden, you had this thing that used to be a single binary is now three different open source projects. Docker uses containerd, which uses runC.

So yeah, all that took 18, 24 months to settle out. But by spring of 2017, at KubeCon in Berlin, we officially announced containerd being part of the CNCF. Again, runC was already part of the OCI. And you essentially had this multi-tiered runtime where Docker could still have Swarm. It could still have all their features. They could still move as fast as they want, but containerd could be used by Kubernetes and other use cases. And runC would be that sort of stable OS layer thing that would actually drive the container operations on the kernel.

ABDEL SGHIOUAR: Yeah. There is a lot to unpack there. I'd like to just go one point at a time. I think it was incredible for a lot of people to come to the realization of how Docker and containerd and Kubernetes were intertwined, even in terms of the code base. And the realization, I think, came when the Dockershim was going to be deprecated by Kubernetes.

I mean, Kubernetes as an abstraction layer allows you to run your containers without really understanding what happens under the hood. And when you use a cloud provider, it's even worse, I guess, because you don't even know that there is Docker installed on the node. Or maybe you SSH into the node, and you do a docker ps, then you see the containers running.

I think that intertwining is the part you described that created tension, because these are two open source projects, worked on by two different communities, that have to somehow talk to each other without having an API. Because the shim is not technically an API. So creating the stable API that you described suddenly allowed Kubernetes to move as fast as it can, while still guaranteeing that the runtime will just work.

PHIL ESTES: Right. Right.

ABDEL SGHIOUAR: And then the other thing also is-- this is something that I did a talk like a few years ago-- one of the things that Docker did is allowed people to write microservices in an easy way. But Docker itself is not a microservice-based architecture. It's one big code base, and it's one big binary. It has multiple features-- so pull, push, run, orchestrate. But it's just under one big piece of software, which is quite interesting in the world of cloud native, because it's all about microservices, and decoupled codebases, and small things. And then, oh, yeah, we're running this 250-megabyte binary that does multiple things.

The history is very interesting for me because of understanding how these things evolved over time to understand where we came to today. But basically, today we are in a situation, essentially, where Kubernetes can call anything that supports CRI, which is the Container Runtime Interface. Where did this CRI come from? Was that the Kubernetes community?

PHIL ESTES: Yeah. So Kubernetes, the sig-node community came up with the CRI definition. Actually, people that you may know, Dawn Chen and her team was a big part of that at Google. In fact, that team, they were the first maintainers of the CRI plugin to containerd.

So when we first wrote containerd, there were actually two open source projects: the CRI plugin and containerd, the core code base. And they had two different sets of maintainers. We worked very closely with them, but the Google GKE node team were the maintainers of the CRI plugin. They wrote the first version of it.

And then within a year-- first of all, anyone who's maintained a plugin that's really part of a binary knows it becomes a painful development process, because it's like, oh, we need that fixed in the CRI plugin. Merge that PR. Now revendor the CRI version into the containerd code base so we can make our next version.

So very quickly, we realized the CRI implementation in containerd is such a core part of the runtime. It should just be part of the code base. So we moved it inside containerd. And that team continued to help maintain it for many years. But yeah, it was part of the Kubernetes SIG Node. In fact, they still own the API definition for CRI.

ABDEL SGHIOUAR: Yeah. So Kubernetes went, we are going to do the CRI. Whatever container runtime has a plugin for the CRI, Kubernetes will run it.

PHIL ESTES: Yes, yes.

ABDEL SGHIOUAR: And then you have the other story about the shim, which is kind of like that part is still very fuzzy for me, because the shim technically still exists. It's not deprecated, right?

PHIL ESTES: Yeah. I mean, when the CRI came about, there was still the Dockershim piece of the kubelet codebase.


PHIL ESTES: And so, has it been a year? I can't remember-- whatever Kubernetes release removed that shim, it was really the point at which that piece of the codebase was deleted.


PHIL ESTES: Now, what's happened since then is Mirantis, who, another long story we can't get into, owns the enterprise editions of Docker, and still has a customer set that pays them for that capability. They were like, well, we don't want Docker to not be able to play in the CRI universe. So they're maintaining cri-dockerd, a shim for Docker that implements the real CRI.

So it does get a little confusing, because Dockershim-- I believe there was a PR that actually deleted that code from kubelet. So it's gone.

ABDEL SGHIOUAR: Yes, from the kubelet, yeah.

PHIL ESTES: So what you still have is, like you said, anyone who can make a CRI listener endpoint, and point the kubelet at it, you can become a runtime for Kubernetes. And so you can now do that again with Docker, but it's not via the Dockershim. It's via the CRI.
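That "point the kubelet at it" step is just a socket address. A sketch of the relevant kubelet configuration (socket paths vary by distribution, and the cri-dockerd path is an assumption based on its defaults; older Kubernetes versions pass this as the --container-runtime-endpoint flag instead):

```yaml
# KubeletConfiguration fragment: any runtime serving the CRI on this
# socket (containerd, CRI-O, cri-dockerd) can back the kubelet.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# For Docker via Mirantis' cri-dockerd, something like:
#   containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
```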

ABDEL SGHIOUAR: The CRI directly, yeah. So let's talk OCI, the Open Container Initiative. What is that? Is that an official organization or is it a foundation?

PHIL ESTES: Yes. So it's really a sister foundation to the CNCF; the two were founded within a month of each other. The OCI was formed, and then I think a month later, the CNCF was formed. They had two different missions, which is why they're separate. The OCI was meant to handle yet another bone of contention in the container world. I'm sure you remember CoreOS.


PHIL ESTES: So CoreOS came out with rkt as an alternative to Docker very early on in Docker's history. And in fact, it was a little bit ugly, because they announced rkt and a specification for containers, I think, on day one of DockerCon Amsterdam in December 2014. It was, like, on that day they made all their press releases, the day Docker was trying to make big news, at an early, early DockerCon with a couple hundred people.


PHIL ESTES: So there were, again, tensions, because CoreOS was saying, Docker is too closed. They were saying, here's a spec for how you run containers. And it's an open specification. Some folks at Red Hat made commits to it, so it had multiple vendors involved. I think they called it appc, the App Container spec, or something.

So the OCI was a direct response to, we don't want to see this industry having two different container specs. Even though Docker never had a formal spec, the fact that CoreOS was saying, here's the spec for containers, and it was different from Docker, made a lot of people nervous. All the big players were very nervous and approached the Linux Foundation, and got Docker and CoreOS's CTO, Brandon Philips, to talk together and say, OK, can we all agree that we need a common spec for containers?

And so the OCI was that body, tasked with, take what CoreOS has done. Here's, obviously, the Docker codebase and how it decides what a container looks like. And make one single runtime specification for containers. And then, of course, the image spec is like married to that, because the image spec is, how do I package the rest of it? Because it's one thing to have a config that says, hey, use these namespaces. Run this binary. Here's the environment.

But then you've got all these image layers, and you need to somehow have a standard about how they're packaged and how this manifest references all these pieces. So the runtime and image spec were the first outputs of the OCI to standardize the world, so to speak, around what a container is and how you package it.
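The manifest described here is a small JSON document from the OCI image spec that ties the config and the layers together. A trimmed sketch, where the digests and sizes are placeholders, not real values:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:<config-digest>",
    "size": 1470
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:<layer-digest>",
      "size": 2800000
    }
  ]
}
```

Because every layer and the config are referenced by content digest, any OCI-conformant tool can pull, verify, and assemble the same image.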

ABDEL SGHIOUAR: Yeah. And that's essentially the key to how today you're able to build a container using Docker but run it using containerd, because they both follow the OCI standard, which is the spec for images, for the runtime, for pretty much everything containers, essentially. Is rkt still a thing? Does rkt still exist as a container runtime?

PHIL ESTES: No, they had-- I'm trying to think of the name of the company in Berlin that maintained it. So CoreOS, obviously, got bought by Red Hat.

ABDEL SGHIOUAR: By Red Hat, yeah.

PHIL ESTES: But Kinvolk-- I don't know if you know anybody from Kinvolk, who then got bought by Microsoft. So they're all Microsoft employees now. Anyway, Kinvolk was paid by CoreOS to keep maintaining it for a number of years, even though the number of users was dwindling. And it was a CNCF project that was brought in at the same time as containerd. It was archived, formally archived by the CNCF TOC a couple of years ago. So it's not an active project at this point.

ABDEL SGHIOUAR: OK. I bet there are some people out there. It's still running somewhere.

PHIL ESTES: Yeah, sure. I'm sure some people--

ABDEL SGHIOUAR: Still use it.

PHIL ESTES: I think the funny thing is there was a European carpooling company that had built around it, and Kinvolk was supporting it on their behalf.


PHIL ESTES: BlaBlaCar, BlaBlaCar.

ABDEL SGHIOUAR: Ah, BlaBlaCar, yes. Yeah, yeah, the car sharing app.

PHIL ESTES: Yeah, yeah. That's the only well-known customer I ever knew using rkt.

ABDEL SGHIOUAR: Oh, wow. I didn't know that. I know BlaBlaCar because they're customers of ours. And I work with them quite a lot.

PHIL ESTES: Yeah, yeah, yeah.

ABDEL SGHIOUAR: This is the thing about technology in general is things get deprecated, and they get archived, and then you go somewhere. You meet somebody. They're like, yeah, we're still using that thing that no one maintains anymore, right?

PHIL ESTES: Right, right.

ABDEL SGHIOUAR: I think my favorite story was a few years back when I was working in Belgium. A colleague's girlfriend-- they live in France, and the French government got to a point where they ran out of COBOL developers. They couldn't hire COBOL developers. So they actually created a training program through one of these unemployment offices called [FRENCH], which is a French thing, where they ask people, even if you don't have a tech background, just come in, and we'll train you to become a COBOL developer so that banks can hire you. And this was like 2016 or something. It was crazy.

So we talked about the containerd story. We talked about the OCI. So containerd is a runtime, right?


ABDEL SGHIOUAR: Which uses runC under the hood.


ABDEL SGHIOUAR: Yeah, just tell us more about it.

PHIL ESTES: Yeah. So that is where terminology does get a little bit interesting, because we also have the concept of a shim in containerd. It's unrelated to Dockershim and the Kubernetes interface. But the reason we designed containerd this way-- I think it was a topic you wanted to get into anyway-- is we didn't want the overall containerd API, containers and all the resources around them, images, to have a hard linkage that could only drive runC. Because Red Hat has crun, written in C, an OCI-compatible, runC-like binary that they use with OpenShift.

And then there are people who said, well, I don't want just to have Linux kernel isolation. I want to use lightweight VMs. I want to use, back in the day, Hyper.sh, which became Kata Containers, or Google gVisor. And so containerd is very much a service-based architecture, where we have unique gRPC services for runtime, for image, for container resources, the metadata about a container, its labels. So all these are different services in containerd, each one having its own gRPC API.

The reason being is that you can use pieces of containerd without using all of it. You could build yourself your own higher layer. You could say, hey, I don't like Docker. I want to build my own thing. But I don't want to rewrite all these capabilities. containerd has all of them. So we wanted to make containerd very embeddable by higher layer components.

ABDEL SGHIOUAR: Yeah, it's composable, essentially.

PHIL ESTES: Yeah. So above that is all these services. And then you have a shim layer that says, well, when I want to create a task, by default we'll use the runC shim. But you could say, please, start my container with the gVisor shim, or please start it with the Kata Container shim. And therefore, you get those levels of isolation without having to change anything about how you call the containerd API.
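Concretely, selecting a shim per container is just configuration plus a runtime handler name. A sketch of the containerd CRI config (section names follow containerd 1.x conventions; the gVisor entry assumes its documented io.containerd.runsc.v1 shim type):

```toml
# /etc/containerd/config.toml fragment: registers an extra runtime
# handler alongside the default runC shim.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
```

Kubernetes then reaches the alternative isolation through a RuntimeClass whose handler field names that runtime ("runsc" here), so the pod spec changes but the containerd API calls do not.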

ABDEL SGHIOUAR: So you are basically decoupling how you consume the API from how the underlying process is started, and managed, and orchestrated.


ABDEL SGHIOUAR: And that actually brings me to a question that we have. When I talked to the Docker CTO, we had a conversation about Wasm, and he was like, yeah, we're using containerd, because containerd is a generic runtime tool. So I did some digging to understand how it works. Basically, they wrote a shim for a Wasm runtime that containerd can call, right?

PHIL ESTES: Yes, exactly. So just like there's a gVisor shim, there's now a Wasm shim. And if you drive containerd via the API or via the command line and specify the Wasm shim-- obviously, there are some tricks you have to do, because there's no real standard for how Wasm images are packaged. What do they look like? They're basically artifacts, OCI artifacts. The Wasm shim knows how to say, oh, OK, I know the Wasm binary blob is here, and I can now run that with WasmEdge or whichever runtime they support.
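Mechanically this mirrors the gVisor case: the Wasm shim is one more runtime handler entry. A sketch, assuming the WasmEdge shim's published io.containerd.wasmedge.v1 runtime type:

```toml
# containerd config.toml fragment for a Wasm runtime handler
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmedge]
  runtime_type = "io.containerd.wasmedge.v1"
```

Docker Desktop's Wasm support surfaces the same idea as flags, roughly `docker run --runtime=io.containerd.wasmedge.v1 --platform=wasi/wasm <image>`.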

ABDEL SGHIOUAR: So it's actually quite interesting that all these projects had their history, which we talked about, and the tension between the communities. And at the end of the day, if you are doing Wasm inside Docker, you are still using containerd, just not runC, but whatever runtime Docker wires up for Wasm.

And I was actually reading one of your articles on your blog as I was preparing for this, the one about resolving conflicts. And as you're talking, that article keeps coming to mind: technology and people are two pieces that influence each other very heavily. And I think there is no better time to think about this than what's happening right now with HashiCorp and Terraform.

PHIL ESTES: Oh, yeah. One of the interesting things is how many people from that original Docker community are still involved in some way. And I still chat with them on Slack about various things. 10 years have passed, and at points in those 10 years, there was disagreement, or debate, or maybe even frustration with different people. But it's amazing how many of them are still working together today, somewhere in the Kubernetes container, containerd universe. And we're still all friends, for the most part. And like you said, dealing with the complexity of different vendors, and different employers, and different initiatives. But yet, we're all basically still working together.

ABDEL SGHIOUAR: Yeah. It's also the affiliation of the vendor that you're working for, and what their interests are. And I think that's one of the reasons why I think something like the CNCF, despite what people say about the CNCF as an organization, having a vendor neutral place where a project is pretty important for this whole cloud native world that we're in.

PHIL ESTES: Yeah, definitely.

ABDEL SGHIOUAR: Technically, today, I think there are three main runtimes. There is Docker, which still uses containerd under the hood, or you can just use containerd directly, and then you have CRI-O, right?


ABDEL SGHIOUAR: So give us the rundown. What's the difference between these three things?

PHIL ESTES: Docker and containerd are a little more interesting to try and differentiate, because, like you said, one kind of consumes the other. What I think is probably the main differentiator is that containerd has way less opinions and support for very specific Docker things. Docker has health checks. Docker has volume drivers. I don't know if they support CNI drivers, but they've had networking drivers for many years now.

For containerd, if you look at our project page, at our SCOPE.md document, there's a bunch of things that are explicitly out of scope, and we've never had built-in networking support. You have to configure a CNI driver to handle networking, which aligns well with use by Kubernetes.
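"Configure a CNI driver" concretely means dropping a conflist file where containerd's CRI plugin looks for it. A sketch modeled on containerd's documented default example, trimmed for illustration (the file path and subnet are assumptions):

```json
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]]
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```

Saved as something like /etc/cni/net.d/10-containerd-net.conflist, this keeps containerd unopinionated: it just invokes whatever CNI plugins the file names.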

So containerd has always maintained that we are this core of API-driven capabilities that are way less opinionated than higher layers which use us. So in a sense, Kubernetes using containerd adds a bunch of opinions. You need a pod spec.

ABDEL SGHIOUAR: Yeah, of course.

PHIL ESTES: You need to use the Kubernetes API, which is, of course, naturally what it should be. Same thing with Docker. When Docker uses containerd, it builds its own constructs over top of containerd's capabilities. So things like health checks and volume drivers and networking support, that's all in the Docker codebase above containerd. And containerd doesn't have to know about it.

So you can almost think of containerd as sort of the core of container runtime without any of the sort of nice user layer features. And we just expect Docker or Kubernetes to provide that for you. We don't expect a ton of people to want to use containerd on their laptop, for example.

Now, the interesting thing is that people have. And they're like, oh, well, let's build a Docker compatible command line on top of containerd. And so now you have nerdctl. And Buildkit uses containerd. So you can almost rebuild a Docker-like runtime out of these components. So that's kind of the one side. Docker and containerd are similar in a lot of ways, but one builds on the other.

CRI-O came out of that contention around the Docker codebase. And Red Hat, there's a much longer story there with things that I think are still not able to even be talked about with legal stuff between Docker and Red Hat. But I'll say my piece of it was Vincent Batts and I have been friends for a very long time. He was at Red Hat. I was at IBM at the time.

And the discussion I was having with him is like, I think we're going to work this out. We're going to make containerd the neutral, open source piece that you guys need at Red Hat-- Kubernetes community needs. But I think they had so much pressure internally to answer this question. What are we going to do if Docker doesn't let us package Docker and rel, if there's actually a legal injunction or something that-- if we totally break relationship with Docker--

ABDEL SGHIOUAR: What happens?

PHIL ESTES: Yeah, what happens? So they had a group of engineers, who basically-- you'd have to verify with them. But over a weekend or a very short period of time, they threw together CRI-O as this little process manager piece. They took the graph driver piece of Docker as the kind of snapshotter layer, and then they wrote an implementation of the image spec as their image layer.

And you can go find these on GitHub as github.com/containers/image and github.com/containers/storage. So it's almost like they wrote two libraries. And out of that, they built CRI-O. They built Podman. They built Skopeo.

And again, if you think about it, one of their big contentions with Docker is like, well, Docker is this engine that's always running as a daemon on your system. And so when you type docker run or docker build, it talks to the daemon, and the daemon does the work. And that's not like the old school Unix style of how programs run on Unix.

They're like, we want these libraries, and small programs that do one thing that call the library to do the work. And so Podman, CRI-O all build on that same idea. And there's a senior distinguished engineer who was involved in the Docker community, who was always making these arguments in the Docker community. And I think the end result of what you see from Red Hat today is just his vision finally realized. What he wanted Docker to become, he finally just convinced his teams to write internally. And so that's why CRI-O, Podman, Buildah, Skopeo all exist kind of in their own architecture, because they're all built around these two core libraries that Red Hat built.

ABDEL SGHIOUAR: This goes back to the initial conversation about the fact that Docker itself is not composable.


ABDEL SGHIOUAR: So it is actually very interesting. I was in the middle of the period when Google was migrating away from-- when we were turning on containerd as the default runtime in GKE. And there was quite a lot of explaining that we had to do to customers. And I think what was interesting is a lot of times, you talk to people that don't realize that when you have GKE with Docker, well, technically, you have the full Docker on the node. So you can SSH in, and just build an image on the node.

You have all these pieces of components you don't actually need if you just need the node to just run the containers. So that goes back to the thing of Docker versus containerd. The way I see it, it's like Docker is more a developer tool. Containerd is what you put in production with your Kubernetes cluster. That's it.

PHIL ESTES: Yeah, yeah.

ABDEL SGHIOUAR: And then CRI-O, I did not know about CRI-O, the history. Thank you. I did not know about the history behind it. Obviously, yes, so I think what you brought up as a topic is quite interesting. And that's what we are seeing today. It's like when you are relying on somebody else's piece of code, and you are repackaging their software, what happens if they just suddenly say we're changing the license, right?

PHIL ESTES: Yeah, yeah.

ABDEL SGHIOUAR: The way I see it going forward in the open source space is we're going to see more and more of this. We're going to see more and more companies that depend on software that is owned by other companies going an alternative path, just from the fear that what if they change their mind.

PHIL ESTES: And like you said earlier, there's a ton of debate about the value of the CNCF. That's been argued about not just with CNCF, but all kinds of other foundations. And the truth is foundations are imperfect, just like people are. There are good and bad sides to some of these foundations.

But I'm sure you know this from Google's customers. There are customers that really value the fact that our services are built around CNCF open source projects, because there's at least a little bit of comfort in knowing the CNCF is holding the IP, the license agreements, the trademarks in a vendor neutral space that isn't subject to the whims of like some CEO waking up and saying, I want to do something totally different with our code or with our project.

And so in that way, foundations do give people some safety. I mean, I know at IBM, we had customers saying, we only want to consume services that we know are using the graduated CNCF projects, because we believe in the viability of that project, the long-term support, et cetera.

ABDEL SGHIOUAR: I would add to that the CNCF also helps keep tabs on all the vendors, right? The whole governance model makes sure that the project does not evolve in such a way that it favors one vendor versus another, that there is agreement on how we progress and how we move things forward.

PHIL ESTES: Sure, sure.

ABDEL SGHIOUAR: So the conversation of containers versus microVMs-- and this will lead us to the security part of this conversation. So you have Kata Containers and Firecracker. Can you help us wrap our heads around containers versus microVMs?

PHIL ESTES: I think you had mentioned this to me, that you had heard this phrase. And I've heard it attributed to different people. But containers don't contain. I'm pretty sure someone made T-shirts at some point.

ABDEL SGHIOUAR: I'm sure they did.

PHIL ESTES: I used to attribute it to Jess Frazelle because I think she said that in a talk early on in the Docker days. So anyway, it could be her. It could be someone else. But as you know, like every piece of software has CVEs at some point, has security vulnerabilities. And I think the perception around containers, especially 10 years ago, was that like, here's a bunch of code in the Linux kernel that most people have not used. Some people have. But how many people have explored every edge case of starting a PID namespace, or a user namespace, or the mount namespace? How many holes could there be? How many like edge cases could there be where if there's a weakness here, all of a sudden, I have root privilege outside of this mount namespace, and I control the whole node?

And we've seen that. There's been, I think, at least three well-known, critical escapes in containers in the last 10 years. And there's been a ton more CVEs, but a lot of them are much more minor, or protected by AppArmor, or SELinux, or good defense in-depth posture would have saved you from those escapes.

Regardless, as you know, as you move up in the stack of regulated industries, financial sector, these people do not want to be told like, oh, by the way, it could happen that if you get out of this, you'll be root. And it's newer technology. Well, at least 10 years seems like a long time, but VMs have been around for several decades now.

So anyway, all that to say that the trust boundary around containers, first of all, because every piece of code, as you know, that can run in a container is calling the same kernel instance as every other container on that node, you have to be really, really sure that there's not escapes across that very vast threat model of the entire syscall interface to Linux. Because everybody's making syscalls, every container. Even though there's these nice isolation primitives we have.

The VM boundary is much more narrow. It's like you boot a kernel on top of this virtualized hardware platform, and at least the belief has been, unless you open a lot of other holes, like I know SR-IOV, and all these high-speed interconnects, where you take a memory space so that you can do really fast inside VM to outside VM transfers for IO. If you keep that surface area small, VMs, at least on paper, are much more secure primitives than containers. So I think that's the basic argument about containers don't contain. It's that it just doesn't have that kind of isolation that you can necessarily rely on as being almost ironclad.

So yeah, that, I think, has been the argument about why not everyone is certain that containers are the best way to have isolation, especially between tenants, thinking about Pepsi and Coke on the same server. I'll tell you for sure, AWS will never do that.

ABDEL SGHIOUAR: Yeah, yeah. I mean, it's interesting that those technologies exist, like Kata Containers, for instance. Kata Containers is not AWS, but is it Microsoft now, right?

PHIL ESTES: Yeah, parts of it, because Kinvolk-- am I right Kinvolk has been working on Kata? Intel is actually very involved in Kata because they had Intel clear containers. And there was a Chinese vendor, hyper.sh, that actually had a public SaaS cloud for a while, offering their capabilities.

Those two entities came together to create Kata Containers underneath the Open Infrastructure Foundation, which is where OpenStack lives today. So Kata is sort of a little bit unique. It's not part of the CNCF. It's part of this Open Infra Foundation.

ABDEL SGHIOUAR: And then you have also the AWS Firecracker, which is what AWS uses to run Lambda on, pretty much. So I think that these two technologies, the way I see it, is it's not something that your end user developer will have to worry about too much.


ABDEL SGHIOUAR: If you are running single tenant clusters or single tenant basically anything, you wouldn't have to worry about it. I knew Kata Containers and Firecracker because I read articles about them. But I think the first time I had a realization about, hmm, that's a very valid use case was last year at KubeCon in Amsterdam, where there was the Cloud Native Rejekts the weekend before.

PHIL ESTES: Yeah, right.

ABDEL SGHIOUAR: And I was there. And the maintainer of OpenFaaS, Alex Ellis, he did a demo about building ARM apps on real ARM servers-- so not emulated-- with Firecracker to guarantee the isolation, right? He was building basically a PaaS, a Platform as a Service, where he would share a physical ARM server between multiple tenants to build apps, because it's faster, but then using Kata Containers as a way to guarantee that isolation, right?


ABDEL SGHIOUAR: So it was like the first time I had this little light bulb in my head. It was like, hmm, that's what the thing can do. The way I see it is basically it's lighter than a VM, but has better isolation than containers.

PHIL ESTES: Yeah, exactly.

ABDEL SGHIOUAR: Does containerd have any support for any of the microvms?

PHIL ESTES: Yeah. So there are, again, two shims written for microVM isolation. One was written by AWS and is an open source project, firecracker-containerd. So inside the firecracker-containerd project is a shim similar to what we talked about earlier. You can drive containerd, say, I want to isolate with Firecracker, and you get that isolation automatically.

The Kata Containers community has built a shim for Kata very similar. It does the same thing. You run containers via containerd. You specify the Kata shim. And you get Kata VM isolation on the back end.
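In Kubernetes terms, choosing one of these shims per workload is typically expressed with a RuntimeClass. A minimal sketch (assuming the node's containerd config registers a handler named "kata"; that name is conventional, not guaranteed, and the image is hypothetical):

```yaml
# RuntimeClass maps a name pods can request to a containerd shim handler.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata          # must match a runtime entry in containerd's config
---
# A pod opting into microVM isolation by naming that RuntimeClass.
apiVersion: v1
kind: Pod
metadata:
  name: isolated-workload
spec:
  runtimeClassName: kata
  containers:
  - name: app
    image: registry.example.com/app:latest   # hypothetical image
```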


KASLIN FIELDS: And gVisor has a slightly different design, but it can also use VM isolation, lightweight VM isolation through their shim as well.

ABDEL SGHIOUAR: It's an interesting space, I think, to keep an eye on. I think that as a community, we don't know what we don't know, and we don't know when the next CVE container-escape thingy is going to happen, right?


ABDEL SGHIOUAR: And the more of those happen, the more people will realize the quote that we have to figure out who to attribute, that containers don't contain, right?


ABDEL SGHIOUAR: Well, cool. I mean, I had a blast. That was an awesome conversation. I learned a lot, especially the history of CRI-O. I didn't know that.

PHIL ESTES: Yeah, yeah, yeah.

ABDEL SGHIOUAR: So I'd say thank you very much, Phil. It was a pleasure talking to you.

PHIL ESTES: Yeah, this was awesome. Thanks for having me on. Sadly, I have my camera at the wrong angle to see my containerd shirt today.

ABDEL SGHIOUAR: The podcast is audio only for now.

PHIL ESTES: Oh, OK. Yeah, yeah.

ABDEL SGHIOUAR: I can assure the people that he's wearing-- I have a similar T-shirt. [LAUGHS]

PHIL ESTES: Yeah, yeah.

ABDEL SGHIOUAR: But we're planning to turn it to video at some point, because we received a lot of feedback about people wanting it in video format.

PHIL ESTES: Yeah, no, awesome. It was great to be here, and great to chat with you.

ABDEL SGHIOUAR: Cool. Thank you very much.

PHIL ESTES: Yep, thanks.

KASLIN FIELDS: All right, thank you so much for that interview, Abdel. I was really excited that we got to talk to Phil. Phil has been such an important figure in the community for such a long time. And he brought that to bear here with such an interesting history of containers and kind of Kubernetes, but really kind of sticking to the container space, which I haven't thought about in a number of years. I haven't really dove back into how the runtimes work, and kernels, and things like that. And it was really refreshing and exciting to hear about all of that again.

ABDEL SGHIOUAR: Yeah. So it really felt to me like sitting by the fireside, having a drink, and just having a chat with somebody who knows so much. I could be doing that for hours. He has been involved in a lot of things over the years. I mean, it's 25 years just in IBM, and then a few years at AWS, starting from Linux, and old school mainframes, and stuff like that. And so, yeah, it's a lot of history.

KASLIN FIELDS: Yeah. He really did dive into the history there with all kinds of different things. I thought it was interesting when you talked about the beginnings of him being involved with container stuff, and how he didn't really work on container stuff before. It was like we need someone to work on this. And that's how it became Phil. I feel like a lot of people who got involved with the space, that was kind of how it started. [LAUGHS]

ABDEL SGHIOUAR: Yeah. And I guess also it requires a little bit of prior knowledge, which Phil has, because he has been working on Linux-related topics, which, I mean, effectively, containers as we know them today-- I mean, they existed way before. Obviously, Docker made containers popular. But as we know them today, they are just kind of like an abstraction layer on top of existing Linux primitives, right?

KASLIN FIELDS: Right. And y'all mentioned the-- one thing that you talked about a lot was container security and how containers don't really contain. [LAUGHS]


KASLIN FIELDS: And he mentioned that he thought that that might have come from Jess Frazelle, who has been such a big figure in that space as well. And she was one of the first people that I learned about containers from. So it's kind of a blast from the past, hearing about all of these terms and concepts around containers that I haven't heard in a while.

ABDEL SGHIOUAR: Yeah. I actually was lucky. I met Jessie once, and Phil, of course. Phil and Jessie and a bunch of other people were invited by CERN back in 2019 for a mini container summit in Geneva, in Switzerland. And Phil invited me. So I was very lucky to know Phil, because I got to visit CERN and visit the colliders.

And it was so interesting because I was sitting in a room with Jessie, and Phil, and a bunch of people, and these folks from CERN, including Ricardo Rocha-- I think you know who he is.

KASLIN FIELDS: Yeah, I know who you're talking about.

ABDEL SGHIOUAR: He was on the KubeCon chair for a while. The week after was KubeCon 2019 in Spain-- in Barcelona. I met this person. We chatted for like a day, then I went back home-- never thought about it-- flew to Barcelona. I was at the keynote of KubeCon Barcelona, and Ricardo was on stage, talking about using Kubernetes for the Higgs boson experiments.

KASLIN FIELDS: Oh, that was such an important keynote. They even redid it later.

ABDEL SGHIOUAR: You remember that one? Yes, yeah. [LAUGHS] So I was like, I know this person. I was with them in a room in Geneva like a week ago.

KASLIN FIELDS: Yeah. That's a really good talk.

ABDEL SGHIOUAR: A very, very humble person, very humble.

KASLIN FIELDS: We should definitely link that in the show notes for anyone who hasn't watched it. It's very interesting to hear how they discovered the Higgs boson, and later they did a redo of it to show how much faster it would be with how we run things in Kubernetes today. Very interesting.

ABDEL SGHIOUAR: Well, I think Ricardo was also on the show. We'll find the episode and have a link for it.

KASLIN FIELDS: Yeah, that'll be great. And one of the first things that I thought of when he mentioned Jessie was, I think, many years ago at this point, she created this project of this container that she challenged people to break out of. And I don't think anyone ever broke out of it. I'll have to try and see if I can find that, too. [LAUGHS]

ABDEL SGHIOUAR: Oh, I was not aware of that one. That would be interesting to listen to.

KASLIN FIELDS: Yeah. Some of my friends at the time were very interested in it, because it was a very interesting example of how containers could contain implemented by someone who knows the technology really, really well.

ABDEL SGHIOUAR: Yeah, exactly. Isn't Jessie who had this talk about building containers from scratch? Or is it somebody else?


ABDEL SGHIOUAR: No, maybe not Jessie. Maybe somebody else.

KASLIN FIELDS: That was-- Liz Rice did that.

ABDEL SGHIOUAR: Yes! Yes, yes, yes, yes, yes.

KASLIN FIELDS: But also, we can link that, because that's a really good talk as well. Several people at the time, when Docker started to become popular, were like, it's not magic. Here's how it works under the hood. So there's probably several of those.

ABDEL SGHIOUAR: There's probably tons of those. But I remember the one from Liz was my favorite, because it was very easy to follow.

KASLIN FIELDS: Another thing I really liked in this was breaking out how containers, like the pieces that make containers, work. I didn't fully understand the history. I don't know that I still do. But I at least understand it a little bit better now, the history of there was all this Linux stuff, obviously, that a lot of it I've heard Google created back in the day because of the ways that we used containers in our own data centers.


KASLIN FIELDS: Cgroups and namespaces, I think we created at least one of those. Anyway, but beyond that, how Docker took it, and made it usable. I always talk about that. Docker is a really great usability tool that made it popular to use containers, because it made it easier to use them, which you talked about quite a bit.

And then how that kind of got split up into containerd and runC. That part I didn't realize. It was all kind of part of Docker, and then got split up into these separate open source projects.

ABDEL SGHIOUAR: Yeah. And like one of the things I like also the discussion with Phil is the fact that he would always go back to the human factor in any of these kind of decisions.

KASLIN FIELDS: He does, which is great.

ABDEL SGHIOUAR: Right. It's always people that don't agree and do their own things on the side. So yeah, the history there is, I think, people would have listened to the episode at this stage, but basically, it's a bunch of people that were like, OK, we're just going to go do our little thing on the side, and then other people who go like, OK, we're going to go write a standard-- so that's where the OCI came from-- to make sure that everybody builds toward the same set of specs.

KASLIN FIELDS: There was and still is so much excitement in this space. Like he was saying, a lot of the folks who started working in the container space back when it really first started to become popular still work together today, even if they're not at the same companies, because the people who work in this space just really love the technology, and the community, and kind of all of the ecosystem around it. It's amazing to see.

ABDEL SGHIOUAR: Yeah. It's actually interesting. I'm coming from the Linux space a long time ago. And one thing I say all the time is that open source, back in the Linux days, is quite different than open source in today's cloud native world. It's more diverse. It's more people. And there's also more vendors involved.

Back in the day, one person would just work for one company for probably their entire career, working on one package, or one set of packages-- maybe not very welcoming from a contribution perspective, or at least if you were a really good developer, then your contribution would be welcome. I guess I don't have to remind people how Linus Torvalds was at some point.

But yeah, it's quite different. As you said, people care more about the technology than where their paycheck is coming from. And regardless of where they are, they will just continue pushing forward the projects that they are maintaining.

KASLIN FIELDS: So splitting out containerd and runC really enabled that is--


KASLIN FIELDS: --part of what I got from this, because the whole point of containerd and runC, too, is just to make this whole container thing more modular so that people can do whatever they want with it. And we still see that today.

And Wasm was one of the things that you all talked about, because it's kind of this continuation of the way that the community split up the functionality of containers into containerd and runC. And I didn't know about the whole kind of the boundaries of where containerd is, and where runC is, and where all of those pieces are, and that Wasm is kind of this new way of using some of the pieces of containerd. So it's like reusing some of the old things, but doing some of the new things. It's just so interesting to see the space continue.

ABDEL SGHIOUAR: I think the composability or the fact that containerd is a set of components makes it easy to-- we talked about this. It makes it easy to actually just use only the pieces you need. The example you're giving is Wasm. So basically, instead of using runC to run a container, you just write your own shim, which is very interesting, because we talked about Dockershim as people knew it. Now, these new connectors, if you want, they're also called shims, which is very confusing.

KASLIN FIELDS: Yeah. I'm like, are we going to end up in another shim situation?

ABDEL SGHIOUAR: Yeah, we're going to end up in a very shim situation, I guess.


So yeah, you can basically write your own shim to just run whatever, whether that's like Wasm or Kata Containers-- God knows what in the future. I like the fact that Phil was very insistent on saying containerd is agnostic. It's just a runtime system that has an API. The major focus is making sure the APIs don't break clients. Then how containerd itself runs whatever it's supposed to be running, that's up to the shim to decide.
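Concretely, registering those extra shims with containerd is just configuration. A sketch (the runtime_type strings follow containerd's shim naming convention; the gVisor and Kata values shown are the commonly used ones, so verify them against your install):

```toml
# /etc/containerd/config.toml (fragment)
# Each entry maps a CRI handler name to a shim binary via runtime_type.
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"    # the default runc shim

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.gvisor]
  runtime_type = "io.containerd.runsc.v1"   # gVisor's runsc shim

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
  runtime_type = "io.containerd.kata.v2"    # Kata Containers microVM shim
```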

KASLIN FIELDS: The thing with the shims that I'm curious about is-- like with Dockershim, now Dockershim is out of tree, essentially. It was in tree, meaning the Kubernetes community had to maintain it, and that caused all sorts of problems in open source. And that's part of why it's not in tree anymore. But it's still being maintained out of tree. It just means that if you want to use it, you have to go and get it from Mirantis.

So the same thing is true with Wasm right now: there's this shim that you can use with containerd in order to be able to use Wasm within Kubernetes clusters. And it's out of tree right now. So you'd have to add it to your cluster in order to be able to use Wasm, is my understanding--


KASLIN FIELDS: --when I try to do this. [LAUGHS] And I'm wondering how that's going to go on. Is that going to be a thing? Are shims going to be out of tree? Is Wasm going to become such a big thing that the community decides to put it in tree again? I'm very curious what will happen with this. I don't know.

ABDEL SGHIOUAR: I think we'll have to see. I mean, all the other shims, technically, are out of tree, right?


ABDEL SGHIOUAR: All the shims for the Kata Containers are also out of tree. gVisor is out of tree. But that follows the philosophy of Kubernetes, if you think about the storage part, where you have the interface, the CSI, and then the drivers are all out of tree. They're not part of the Kubernetes codebase.

KASLIN FIELDS: Which also used to be in tree, which we just had the episode about 1.28, and the deprecations and removals in 1.28 were basically just removing storage drivers from being in tree to out of tree, because that's how we do storage drivers now.

ABDEL SGHIOUAR: And that makes, again, Kubernetes agnostic. It doesn't really care about what your storage driver is. It's up to you to install the plugin you need for the storage thingy that you need to use.
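That plug-in model shows up directly in the API: a StorageClass just names a CSI driver by its provisioner string, and Kubernetes core never has to know how the driver works. A minimal sketch (csi.example.com is a hypothetical driver name):

```yaml
# StorageClass delegating provisioning to an out-of-tree CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
# Kubernetes only matches this string against registered CSI drivers;
# the driver itself runs as ordinary pods, outside the Kubernetes codebase.
provisioner: csi.example.com   # hypothetical driver name
parameters:
  type: ssd                    # opaque to Kubernetes, passed to the driver
```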

KASLIN FIELDS: Yeah. But if you do want to use it, then that means extra steps for you, a tradeoff between bloating Kubernetes itself and enabling people to do what they want to do as easily as possible.

ABDEL SGHIOUAR: Yeah. The tradeoff between ease of use and maintainability for the community.

KASLIN FIELDS: Yeah. So things being out of tree is very interesting and important. We'll see how that continues with things like Wasm and as we get more and more shims.

ABDEL SGHIOUAR: Yeah. And whatever else is coming in a few years from now, I guess.

KASLIN FIELDS: Yeah. I'll be interested to see where container technology continues to go. I can't believe we're still talking about this 10 years later. [LAUGHS]


KASLIN FIELDS: It's great!

ABDEL SGHIOUAR: I think Phil mentioned this. So many things have happened in 10 years that people don't realize. It's really fast. VMs have been around for a very, very long time. This container space is relatively new compared to the age of some other technologies. But it's moving so fast.

KASLIN FIELDS: Yeah. I also really liked the way that he described the people's hesitation around the security boundary of containers. It's just not as well understood, not as well known. People just don't understand the kernel that well. It's a very deep thing to understand. And so that's how I've always kind of explained the security difference between VMs and containers. It's like containers are just a lot newer. People just don't understand that security boundary quite as well. They're very comfortable with the VM security boundary. So sometimes you use one. Sometimes you use the other.

ABDEL SGHIOUAR: I mean, that's where Kata Containers and Firecracker come into the space, if you still want a container, but you want the strong isolation, strong isolation between workloads. I mean, I think that that's the beauty of this whole open source space and cloud native space is you can use whatever works for you. There is no one golden path that just works or solves all the problems.

KASLIN FIELDS: If you can find it.

ABDEL SGHIOUAR: If you can-- well, that's a different problem.


KASLIN FIELDS: Yeah. Which we're trying to help with a little bit here, talking about some of the technologies in this space.

ABDEL SGHIOUAR: Yeah, that's the whole point of the show, right?

KASLIN FIELDS: Yeah. Speaking of which, thanks for the conversation, Abdel. We hope everyone joining has enjoyed it today.

ABDEL SGHIOUAR: I hope so. Please, let us know what you think. We're trying to bring people like Phil on the show, who have been around and know stuff.


ABDEL SGHIOUAR: Hopefully, we'll bring more people like him.

KASLIN FIELDS: Tell us what you want to hear.


KASLIN FIELDS: We'll see you next time.


KASLIN FIELDS: That brings us to the end of another episode. If you enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on Twitter, @KubernetesPod, or reach us by email at <KubernetesPodcast@google.com>. You can also check out the website at KubernetesPodcast.com, where you'll find transcripts and show notes, and links to subscribe. Please consider rating us in your podcast player so we can help more people find and enjoy the show. Thanks for listening, and we'll see you next time.