#69 September 3, 2019

kind, with Ben Elder

Hosts: Craig Box, Adam Glick

kind stands for Kubernetes in Docker. Originally built for continuous integration (CI) and testing of Kubernetes itself, kind has found many uses, including acting as a cluster for bootstrapping other clusters. Original author Ben Elder from Google Cloud joins Craig and Adam to talk about it.

Want to see Adam’s puzzles? Let us know:

Chatter of the week

News of the week

ADAM GLICK: Hi. And welcome to the "Kubernetes Podcast" from Google. I'm Adam Glick.

CRAIG BOX: And I'm Craig Box.


ADAM GLICK: This week, feeling inspired by some of the puzzle things that they have around some of the buildings at Google, I decided to start making some puzzles.

CRAIG BOX: Oh, yes.

ADAM GLICK: And it's an interesting process to go through to try and-- it's one thing to solve a puzzle, because someone's kind of guiding you down a path. It's a really interesting piece. But to go and create that and try and think of, OK. What will people do? What will get them stuck?

It's been a fun process. I've got a few of them written. And I'm actually looking for play testers to go through some of these to kind of figure out what will be the right level of difficulty for people to do when we set them up in the new building that I'm moving into here in Seattle.

CRAIG BOX: So if you like puzzles, please send us an email. And we'll set you up to play test. I haven't had a look yet. Will your puzzle examples translate, or will you have to be a Seattle native to understand them?

ADAM GLICK: Nothing in it requires you to be in Seattle or know Seattle, although when it's eventually hopefully set up in the building, obviously, you would need to find these pieces, versus right now, I just have them as image files. And people would just look at the images versus--


ADAM GLICK: You would hunt around the building to find them would be the first challenge.

CRAIG BOX: Ah. I like my puzzles in the crossword format these days. I've taken to the quick cryptic crossword in the "Times." I have to say, we've spoken a little bit about crosswords. They're not really a common thing in the US, the cryptic crossword.

While I was over there last week, I was doing the "USA Today" crossword, which is this giant grid-- they're all very close to each other, the clues. There's not a lot of spaces. There's a lot of information density.

But it's all just the question-and-answer type. I don't think the cryptic crossword is really a thing that's taken off in the US.

ADAM GLICK: It's not as big. "The New York Times" crossword puzzle is actually amazingly popular. There's actually a case study up on the Google Cloud site about how they use App Engine in order to scale to the tremendous demand that it gets. But it's a traditional crossword puzzle.

And they actually get harder by the day of the week. So it starts out the simplest on Monday and is the most complex and most difficult on Sunday.

CRAIG BOX: Yeah. So actually, I'm a "New York Times" subscriber. They have a daily "mini" crossword. It's a little 5 by 5, so it's tiny. And I have a couple of times where I've actually just gone through and managed to do the whole thing in one go. I've got a screenshot that I can show you, completing it in 13 seconds.

It's like, OK. You probably think I faked that. But I do like the cryptic crossword. It is a completely different challenge. I've got to the point now where I can occasionally solve the whole thing-- not every day, by any means. And they're all set by different people.

So sometimes, you have to be in the mind of a particular person to be able to do it. But just on occasion, every now and then, I manage to make it all the way to the end. It's been about a year, I guess, that I've been plugging away at this puzzle.

And there's a great blog you can follow which actually has experienced solvers go through and explain all the clues to you. So I will go through when I get stuck. And I'll say, all right. Well, here's what I didn't know. And hopefully, I'll get to the grandmother point, where you're able to solve a cryptic crossword on the day.

ADAM GLICK: I made the mistake of-- yeah, you search the internet, and you wander into various spots. And I wandered into one of the puzzling forums, which is full of some amazingly brilliant people who solve these things for breakfast, so to speak. And they post their own puzzles there, as well as explanations after people solve them of how they did it.

And one of them-- someone put out a puzzle in a tweet that was just mind-blowing. And I'll put a link to that, because it just blew my mind how many different puzzles are packed into this one tweet.

CRAIG BOX: In 280 characters or less.

ADAM GLICK: Yeah, a short number of characters.

CRAIG BOX: Like code golf.

ADAM GLICK: Shall we get to the news?

CRAIG BOX: Let's get to the news.


ADAM GLICK: Kubernetes 1.16 is now in freeze. Release candidates will soon be out, and 1.16 is due in two weeks' time. We'll keep you up to date on the release progress.

CRAIG BOX: VMware has announced "Kubernetes Academy, brought to you by VMware," all in quotes, where a set of ex-Heptioids want to give you the Khan Academy treatment in five- to eight-minute video lessons.

VMware would like to remind you that, quote, "Kubernetes Academy, brought to you by VMware," is brought to you by VMware, though the taglines on the videos suggest that, quote, "Kubernetes Academy, brought to you by VMware" was renamed from, quote, "Kubernetes Academy, from VMware" slightly before launch.

ADAM GLICK: TechTarget's Beth Pariseau highlights the success of Knative and says that it will underpin the next generation of PaaS infrastructure in both public and private cloud. Her article highlights customers that moved from Heroku to Kubernetes and then talks to the team at Percy.io, who have been using Google Cloud Run, a managed service based on Knative.

Percy then chose to move to Cloud Run on GKE to get full flexibility to support the concurrency needed for their workload. And they were able to trivially make the change by just changing endpoints.

CRAIG BOX: The first beta of Helm 3 has been released. Eagerly awaited since at least episode 11, Helm 3 removes the Tiller cluster agent in favor of local security configuration, making Helm more namespace and RBAC-aware.

It couldn't come quickly enough for consultant Stepan Stipl, who this week wrote, "To Helm or Not to Helm," documenting drawbacks based on his experience with the tool. Many, if not all of them, will be solved by Helm 3. So we hope it has a smooth path to general availability.

ADAM GLICK: Etcd 3.4 has been released. The storage backend has a number of performance improvements. It no longer blocks reads while there are pending writes. Long-running read actions do not block other writes and reads, leading to significant performance improvements in those actions. And changes to how leases are handled were also made to avoid blocking operations and to avoid auto-renewal of leases after a leadership election.

Improvements in election also helped to avoid challenges with network segmentation and slow networks that can trigger leader votes unnecessarily. The new change creates a pre-vote stage to help check if the instance is up-to-date before triggering an election.

Side note-- who knew machines had such well-functioning democracies? There is also a new node state, where a node joins as a learner and doesn't start voting until it has caught up to its leader's logs. This helps avoid changes in quorum that can cause instability issues.

Finally, the client balancer has been changed so that it uses round robin for clients, versus trying to maintain a list of unhealthy endpoints.

CRAIG BOX: Are you using Cert Manager, the popular tool from Jetstack to install Let's Encrypt certificates? If so, make sure you upgrade to the current version, as Let's Encrypt announced this week they intend to block old versions starting in November and then block outdated releases three months after a replacement is released.

Older versions could get stuck in a loop requesting certificates, thus DoSing the service. And while those bugs are hopefully gone, it's important to be aware of the dependencies in the software and services you use.

ADAM GLICK: Ifeanyi Ubah concludes a four-part series of deep-dive blog posts on Linux namespaces this week. Ubah started with a C program, which merely forks a specified program, and over the course of the series, adds support for the UTS, User, Mount, PID, and Network namespaces. Just add cgroups for your own container implementation.

CRAIG BOX: Another great blog explainer, this time from Erkan Erol. He digs into "kubectl exec", which lets you connect to a process running in a Kubernetes cluster. Rather than my prior belief that it was powered by, quote, "magic," this tool makes a connection to the API server, which is then proxied to the node running the container. Erol explains with code and diagrams exactly how each step works.

ADAM GLICK: The CNCF has released their first Project Journey Report, which follows the history of a graduated project and attempts to correlate the work the CNCF does with the growth and success of the project.

The Kubernetes report focuses on the growth in members and diversity of sources in the project, highlighting increases in the number of companies, which grew from 15 to 315 in the past five years, and also the number of contributors, which has grown from 400 to over 3,000.

The CNCF also calls out that although Google is still the largest contributor and its contributions continue to increase, due to the growth of the project, the percentage of the project written by Google has steadily decreased, showing the health and breadth of contributions to the project. Although most major cloud vendors showed low-single-digit contributions to the project, AWS was notably absent, as it's hard to show a graph that approximates zero contribution.

CRAIG BOX: Japanese retailer Mercari are migrating a monolithic backend to a microservices architecture, which is, of course, a popular reason for adopting Istio. Vishal Banthia posts about their successful journey adding Istio to their multi-tenant production cluster. They adopted the mesh gradually, a good idea for any rollout, and are now using Istio for gRPC load balancing.

They found and worked around problems with the Kubernetes consistency model in their deployment, some of which are due to be fixed upstream, but all of which are worth being aware of. Expect to see more posts on the topic as Mercari's Tech Challenge Month continues.

ADAM GLICK: StackRox announced version 2.5 of the StackRox Kubernetes Security Platform. New features include a Network Policy Generator and automated process whitelisting, both of which baseline activities in the cluster and then block anomalies, powered by machine learning. An admission controller can also prevent the scheduling of pods based upon policies.

CRAIG BOX: At VMworld last week, Platform9 announced it had raised $25 million in a Series D round. This is the company that proudly proclaimed the first managed Kubernetes service on VMware, so we wonder how they enjoyed the rest of the show.

ADAM GLICK: Dell EMC has offered a preview of their PowerProtect software using Project Velero to provide backup and recovery for Kubernetes clusters, as well as to enable migration of persistent volumes.

CRAIG BOX: Aqua Security, authors of the kube-hunter tool for penetration testing, keep finding new things to put into it. Security researcher Daniel Sagi has published a write-up of a potential exploit, where an attacker could spoof DNS entries in the Kubernetes pod.

The attack works by ARP spoofing the IP of the DNS pod and then running a local DNS proxy on the pod you broke into. The default Docker Network setup makes this attack possible, but a different CNI plugin may prevent it. And a workaround can be applied by dropping the NET_RAW capability on pods you know don't need it.
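The capability workaround described there can be expressed directly in a pod spec. As a hedged sketch (the pod and image names below are placeholders, not from the write-up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-raw-sockets        # placeholder name
spec:
  containers:
  - name: app                 # placeholder name
    image: example/app:1.0    # placeholder image
    securityContext:
      capabilities:
        drop: ["NET_RAW"]     # removes the ability to craft raw packets, defeating ARP spoofing
```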

ADAM GLICK: FireHydrant has released a guide to implementing dynamic informers for Kubernetes client apps. Using features of the client-go library, author Robert Ross is able to watch for events on many different objects in a cluster at once. FireHydrant uses this to watch for changes to things such as certificates and node events. Sample code is available from their GitHub repo.

CRAIG BOX: HashiCorp have preannounced some features being worked on on the Vault Secrets Manager for better Kubernetes integration. A Helm chart was recently released and will soon add support for injecting secrets into pods via a sidecar, meaning the app only has to know how to read files off a disk.

They are seeking feedback on two other features, a Sync Mode to regularly copy secrets from Vault to actual Kubernetes secret objects, and a CSI plugin to allow injecting secrets directly, similar to the sidecar approach from before. HashiCorp also announced Consul 1.6 is generally available.

ADAM GLICK: Maya Kaczorowski, our guest on episode 8, has posted a blog post about the recent Kubernetes security audit. Highlighted in the article are the relatively small number of vulnerabilities found in the audit and that none of the discoveries were fundamental to Kubernetes architecture.

The blog post also covers the benefits customers get from running in Anthos or GKE, stating that GKE automatically gets rid of some of the troubling default settings and makes the latest updates available to customers. Using node auto upgrades removes the need for customers to manage their nodes and make sure they are patched against future vulnerabilities as they are discovered.

Google Cloud also announced that their managed service for Microsoft Active Directory, first unveiled at Google Cloud Next earlier this year, has been released in beta.

CRAIG BOX: Red Hat has announced OpenShift 4.2 is now available as a developer preview by way of nightly builds. Over-the-air updates, enabled by CoreOS technology and continuous integration, means preview versions are now available incrementally.

They also announced that the nightly builds of OpenShift 4.2 are available on Google Cloud. If their automation worries you, they also have a hard-way mode, where you create your own Google Cloud infrastructure and install the software on that.

ADAM GLICK: Finally, AWS has released CloudWatch Container Insights to general availability. Container Insights takes container logging information and imports logs into CloudWatch from ECS, EKS, Fargate, and other Kubernetes clusters running on EC2. Additionally, AWS has posted a deployment guide to running Weave Flux in EKS.

And that's the news.


CRAIG BOX: Ben Elder is a software engineer with Google Cloud and the creator of kind. Welcome to the show, Ben.

BEN ELDER: Thanks.

CRAIG BOX: Where did your Kubernetes journey start?

BEN ELDER: In the summer of 2015, I was a college student getting my computer science undergrad. And I saw this Google Summer of Code thing, where you could work on an open-source project funded by Google. And there was this new cool system project called "Kubernetes."

CRAIG BOX: Wow. Hope that succeeds.


BEN ELDER: And so this Tim Hockin guy had this issue about replacing the user space proxy that connects the traffic between the nodes, moving that into the kernel using iptables to make everything more efficient. And that was one of the projects they were looking for a student to work on.

That sounded pretty interesting, so I wrote a proposal. And it was accepted. So that summer, I started working on Kubernetes then.

CRAIG BOX: So that's very early in the Kubernetes process.

BEN ELDER: Yeah. Actually, some of the code landed just before 1.0. And I believe the feature actually turned on in 1.1.

ADAM GLICK: What was it like working with Tim Hockin?

BEN ELDER: Well, so in this case, that was all remote. And back then, it was the google-containers IRC channel and GitHub issues. And I would say Tim was mostly pretty hands-off until later on. He wasn't super worried about it. It was when we were getting closer to landing it that he was paying a lot more attention to it.

And it was kind of understandable, too, I should add, that Kubernetes was doing the 1.0 launch then. I think people were pretty busy. So I want to be clear that Tim was my mentor for that project and that Tim designed all of these things. But I wouldn't claim that I worked with Tim then.

Now, having worked at Google now, working with Tim is awesome.

CRAIG BOX: Has he asked you for your favorite "Star Wars" character?

BEN ELDER: I've actually avoided that one so far. [CHUCKLING]

ADAM GLICK: What is kind?

BEN ELDER: kind is Kubernetes in Docker. It takes Kubernetes, and it makes it run in containers. So your nodes are actually containers running on your machine, which makes it pretty easy to debug and makes it quick. And it means that installing and running it is very easy, because you just need Docker.
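As a rough sketch of that workflow, assuming Docker and the kind binary are already installed, and with "dev" as a placeholder cluster name:

```shell
# Minimal kind workflow sketch; guarded so it is a no-op where kind
# or a Docker daemon is unavailable.
set -eu
if command -v kind >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  kind create cluster --name dev   # each "node" is a Docker container
  kind get clusters                # lists the clusters kind knows about
  docker ps                        # the node shows up as an ordinary container
  kind delete cluster --name dev   # throw the cluster away when done
fi
```

Depending on your kind release, you may also need to point kubectl at the cluster, for example via `kind get kubeconfig-path` on older versions, before interacting with it.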

CRAIG BOX: Now, if my nodes are containers, how do those containers run other containers?

BEN ELDER: We run a container runtime sort of inside the container. So they're privileged containers. Similarly, if you ran Kubernetes directly on your machine, it's going to kind of control the machine. To some extent, it's sandboxed in another container. And each of those containers is itself running a container runtime inside.

ADAM GLICK: That's a lot of "Inception"-like undertaking there. So if I understand that correctly, what you're saying is that you have a container that people think of as a container. In there, you're actually running another system that runs containers. And then you actually embed containers within that?

BEN ELDER: Yeah. So we run one container on the host for each node. And then on each node, we're running a container runtime, where we're running arbitrarily many containers, including the Kubernetes components themselves. Then for added fun, we run this in Kubernetes CI, which is Kubernetes running containers.

And for cleanup reasons, we don't want to run containers directly on the host there. So we actually run Docker in Docker in a Kubernetes pod. And then we run kind in that pod, and we run containers inside that. So you get three layers deep.

CRAIG BOX: We'll put a diagram in the show notes. Do I need nested virtualization for this to work?

BEN ELDER: No. You do need privileged containers to do this sort of thing. And there are some other hairy things specifically to trying to run a Kubernetes inside a Kubernetes. But you do not need nested virtualization. And that was one of the reasons we wanted this.

CRAIG BOX: What exactly are privileged containers?

BEN ELDER: So privileged containers can pretty much control the machine. They have access to things that you normally wouldn't want to give them access to. You would use a privileged container for something like running your networking subsystem, where you expect to control the host.

In this case, Kubernetes and the container runtime kind of need permission to control the host a bit. So we do have to do that.

CRAIG BOX: If I do a 'docker ps' on the main host, am I seeing all of those nested containers listed there?

BEN ELDER: Yeah. And if you look at the processes, you'll see a tree of processes for the whole thing.

ADAM GLICK: What does that mean in terms of performance?

BEN ELDER: It's not as bad as you'd think. The overhead from each of those things is not too terrible. But the biggest thing is that you're just trying to cram a whole cluster onto one machine or rather a machine that's already running Kubernetes.

So any expectation of, like, oh, I'm going to have all this performance out of having a whole cluster of nodes, well, all of the nodes are on the same machine. So that's not true. It has to be divvied up.

ADAM GLICK: When did you first start working on kind?

BEN ELDER: That would be the summer of 2018, I believe. We had our first release in November. And initially, it was just kind of this small project in the test-infra repo for Kubernetes, tossed in with a bunch of other things.

And eventually, we decided to move it out so we could do releases and whatnot. And it went from just testing Kubernetes to being used for more things.

CRAIG BOX: What was the thing that it replaced?

BEN ELDER: We didn't really have anything. We were doing all of our tests by spinning up real clusters in the cloud. So we'd spin up a Google Cloud cluster. We'd spin up an AWS cluster.

A long way back, when I worked on Kubernetes in 2015, I used this Vagrant tooling that was on the repo. And I really needed that because doing the kube-proxy work for networking in Kubernetes, I needed to have multiple nodes and route traffic between them and make sure that worked.

But eventually, that tooling wasn't really maintained and was removed. So kind kind of fills something along that space, where you want to build Kubernetes and test it and have multiple nodes and that sort of thing. In CI, though, we didn't have anything like that. We were just using cloud clusters.

And those are a bit more expensive. And they can be slower, because we need to move more things around, like upload the binary somewhere and then run them. And it's not something that contributors can run, necessarily. They need access to those cloud resources.

CRAIG BOX: People will quite often use Minikube as a way of running a small Kubernetes cluster. kind is a similar project, but different. Obviously, you mentioned being able to run multiple nodes, which isn't something you can do with Minikube. How would you contrast the differences between those two projects?

BEN ELDER: We link to a number of other projects that you should look at. Minikube is definitely a big one. Minikube, one of the things is if you actually run a VM, you have stronger isolation.

So if running VMs isn't a problem for you, then when you run VMs, it's easier to do things like actually have memory limits work correctly or maybe GPU device plugin support. Things where you want really tight control over the whole operating system are not going to be true with kind, because you're sharing the operating system across all of the nodes and with the host, whereas Minikube can run a totally different operating system.
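For reference, running multiple nodes with kind is just a small config file passed to `kind create cluster --config`. This is a sketch, with a placeholder file name, and the exact `apiVersion` depends on your kind release:

```yaml
# kind-multinode.yaml (placeholder file name)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4   # older releases used kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
- role: worker
```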

ADAM GLICK: Would you use this in production?

BEN ELDER: No. Please don't! We've been asked that many times.

kind is primarily intended for testing, so there are some trade-offs. And one of the things that it's not terribly great at at the moment is long-lived clusters. It's really good at I want to spin something up, do some tests on it, throw it away. It's very quick to do that. And the idea is that you're just going to create, delete, create, delete all the time.

ADAM GLICK: What are the challenges with long-lived clusters people should be aware of?

BEN ELDER: Well, so a big one is actually garbage collection. It gets confusing to break up the host. And when you run Kubernetes, you're going to say, oh, when it hits 80% disk usage, we need to start kicking things out. But when we're running kind, we have no idea how much space you want to reserve on your laptop. And it's using the same disk.

So for the moment, that sort of thing we kind of have to just say, don't kick things out, because otherwise, you'll go to start the cluster. And it will kick out the API server, and the whole cluster will crash. That is a much easier thing to think about if you're running a VM and you have a dedicated virtual machine disk, and it's actually isolated.

There are also some issues where if you restart the machine, the containers currently won't cleanly restart. So you'll need to delete and recreate if you restart your machine.

CRAIG BOX: kind came from a need to test Kubernetes and run as part of its continuous integration. What other use cases have you seen?

BEN ELDER: It's being used to test applications and also extension points. So Istio is using it quite a bit right now. We looked at some other projects using it. It's actually kind of been a little bit difficult to keep up. But I know some other companies are using it.

I've heard we have some friends in the community at VMware that are contributing and using it quite a bit there. It's also used, I think, by default for Cluster API bootstrap. Cluster API uses Kubernetes controllers to create all the machines and the cluster objects and manage the cluster.

So they kind of have a bottom turtle problem, where if you're going to use a cluster to run a cluster, where do you get a cluster from? So one of the answers there is, well, you create one with kind for a moment, get your real cluster up, and then throw away the kind cluster.

ADAM GLICK: You mentioned that you heard about other companies using it. Who are the other companies that are involved in building it?

BEN ELDER: VMware is definitely a big one. Also, there's an individual at SUSE who has sent a lot of really great stuff to get IPv6 working, in particular, which is pretty cool. I believe we were the first conformance-tested and passing cluster for IPv6, given that we also had to fix some of the tests. And that is Antonio over at SUSE.

CRAIG BOX: I did see that IPv6 was implemented first on kind. So congratulations. What things did you need to change, or what about kind made it possible to hit that milestone?

BEN ELDER: I think because it's all local and it's running with containers, it's really easy to tweak everything about the cluster top to bottom and debug it. So it was kind of easy to iterate on. As far as the tests, a lot of things were doing things like, oh, I'm going to make sure DNS works.

And well, DNS has totally different records for IPv6 and IPv4. And a lot of things were just not written with the idea that you might use IPv6. A lot of things were hard-coded to use local hosts with an IPv4 address, that sort of thing.

CRAIG BOX: I hear a lot on the Kubernetes project about the idea of end-to-end testing. It's not something I've ever had a chance to dig deep into. So can you start by explaining what an end-to-end test is?

BEN ELDER: So with an end-to-end test, you're bringing up the whole system, the entire thing-- so in this case, Kubernetes itself. We bring up a whole cluster. And then you're going to test that the entire system behaves as you would expect.

So instead of, say, isolating one component and testing that component, we're just going to bring up a cluster. And we're going to check that we can create a pod and that the pod runs and it has the output that we'd expect.

CRAIG BOX: So I understand that we have the ability to create a cluster with something like GKE or something like kind. Now, we have the cluster running. What is the actual infrastructure that's used to run those end-to-end tests?

BEN ELDER: In Kubernetes's case, it's another Kubernetes cluster.

CRAIG BOX: It's always another cluster, isn't it?


ADAM GLICK: Turtles all the way down.

BEN ELDER: Turtles all the way down. That's actually Cluster API's logo.

So we have another cluster, and we have it running a pod that's running the test execution. And in kind's case, that pod is also running the cluster that we're testing. In the GCP or AWS case, we're actually creating a cluster somewhere else and talking to that from the test.

CRAIG BOX: Have you written custom runners that perform all those tests? Or is that something in an open-source situation that you'd pick off the shelf?

BEN ELDER: Yeah. So we have. There are a number of them for Kubernetes itself. But the general idea is pretty much that you're going to hook up your little test harness. And it's just going to create a cluster, set up and run your tests, delete the cluster, maybe collect some logs. We have some examples of that in our documentation.

CRAIG BOX: And if I'm an application developer and I want to test how something works-- let's say I'm building an operator for deploying some other piece of software on top of Kubernetes. How might I do end-to-end testing of that?

BEN ELDER: Well, depending on your CI platform, you'll set up kind on that, per our docs. And then it will really depend on your application how you end-to-end test.

But in the very broad stroke, you're going to create that cluster. You're going to deploy your application or your controller to it. And then you're going to write some tests that maybe look like unit tests. But what they're actually doing is calling out to the cluster and creating real resources there and then asserting that if it queries them, that they look like you would expect them to and that sort of thing.
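The broad strokes Ben describes might look something like this in a CI script. It's a hedged sketch: the `e2e-ci` cluster name, the `deploy/` directory, and the `./test/e2e/...` package path are all placeholders, not a real project layout.

```shell
# Sketch of an end-to-end CI step for a controller or operator; all names
# are placeholders. Wrapped in a function so nothing runs when sourced.
run_e2e() {
  set -eu
  kind create cluster --name e2e-ci --wait 120s   # fresh throwaway cluster
  trap 'kind delete cluster --name e2e-ci' EXIT   # always clean up
  kubectl apply -f deploy/                        # install the thing under test
  go test ./test/e2e/...                          # tests hit the real API server
}
```

A CI job would then just call `run_e2e`; because kind clusters are cheap to create and delete, every run starts from a clean slate, which is the pattern Ben recommends later in the episode.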

CRAIG BOX: Is that something that people who are used to running all this locally have to make a mental leap about how things might work in the distributed environment?

BEN ELDER: A bit. It's also nice because with kind, we've made it very possible to do this completely local, totally offline, and avoiding some of the flakiness and pitfalls you might have from having to move things around. So we tend to recommend you should still do things like unit tests. You should do integration tests.

But particularly, if you're talking about, say, a controller, there's enough interaction with the whole system that you might want actually end-to-end test there. And then we're going to tell you, think about end-to-end testing with kind.

But before you go to production, you should also test in a more production-like cluster, because kind may actually help surface some things that you might not see there. Say it's a more resource-constrained situation. You might see race conditions more.

CRAIG BOX: Or your disk filling up.

BEN ELDER: Or your disk filling up. But your production cluster, there's still going to be small things that are different. Maybe your production config's a little different.

CRAIG BOX: Obviously, there'll be latency between nodes that you wouldn't have if you're running all the nodes on the single machine.

BEN ELDER: Right. Your latency may be different. The scale that you're able to run it at may be different because your local machine just doesn't have the horsepower to really scale-test, that sort of thing.

CRAIG BOX: If I'm using Kubernetes to deploy software, kind is something you've mentioned is used in the testing of Kubernetes itself. Am I as a user likely ever to have a need to come across kind?

BEN ELDER: Probably not. If you want to play with something-- we say "testing," but testing is not always necessarily like, oh, I've got some rigorous CI testing.

It can also just be I'm trying something out on my machine. It's a reasonable choice for that. It's pretty easy to set up. In most cases, we'd like to think it just works.

ADAM GLICK: Are there any anti-patterns for running kind?

BEN ELDER: Probably the biggest one right now is just using one kind cluster for everything internally. They're meant to be really cheap to spin up. You should just use a clean cluster all the time. You're going to get more repeatable testing.

ADAM GLICK: How fast does it spin up a cluster?

BEN ELDER: So if you're only doing one node on a fast machine, the image is already pulled, you have kind installed, it's between 20 and 30 seconds. It'll be higher than 20, but it'll be under 30.

CRAIG BOX: That's a very precise measurement.

BEN ELDER: I pay very close attention to this. We try to keep it down.

ADAM GLICK: When you say it's a fast machine, are you talking someone who's got a high-end laptop or desktop? Are you talking about a server? What do you mean when you say that?

BEN ELDER: A developer workstation. Maybe not an AMD Threadripper or something, some huge gaming rig. But--

CRAIG BOX: But a 486 or higher.

BEN ELDER: Right. Maybe not your netbook or something, but certainly not a large server.

CRAIG BOX: Is kind explicitly designed to be able to run on resource-constrained environments?

BEN ELDER: Yeah. So we've paid a lot of attention to that, though we need to do more digging into it.

But as a simple example, Docker for Mac, you configure how much resources it has. If you stick that to the absolute minimum, you can run kind and still have overhead for running your applications and whatnot.

CRAIG BOX: Docker for Mac provides a Kubernetes installation. Do you foresee tooling like that starting to use kind as the way that they provision Kubernetes in smaller environments?

BEN ELDER: I don't think Docker for Mac will, and there are good reasons not to do that. But kind is kind of nice, because you can dig into it so deeply. And it's so flexible.

But in Docker for Mac's case, they're shipping this OS that they control. And if you're doing that, then it's certainly possible to run Kubernetes with less overhead and more tightly control everything.

CRAIG BOX: What is the relationship between kind and things like kubeadm that are designed to deploy the Kubernetes components?

BEN ELDER: So kind is built on kubeadm. We use it heavily, and we're working with them fairly closely. They use kind for a lot of the testing of kubeadm. And that is one of the intentional goals.

CRAIG BOX: And you mentioned the Cluster API before. Am I able to use the Cluster API to provision kind clusters?

BEN ELDER: So actually, I have not been able to keep up too well with this project. But there is a Cluster API Provider Docker that is using some pieces of kind, in particular, the way that we package up Kubernetes to run everything. It is not one-to-one kind, necessarily, but it is derived from it a bit. That was, I believe, the goal.

ADAM GLICK: Are there any plans to have a kind-like project that works in other containers?

BEN ELDER: Yeah. So actually, that's the thing I'm possibly most excited about right now. We would like to support Podman. We're also looking at things like Ignite, which kind of blurs the lines between VMs and containers, or Kata Containers, where I can take this container image that I built for running kind with Docker.

But I can run it in a VM. And if that's not a problem for you to run VMs, you get that tighter isolation.

CRAIG BOX: Do I actually need the "d" for Docker? Or can I run this on containerd, for example?

BEN ELDER: Kind runs containerd internally. But currently, it cannot run against containerd on the host. That is another target we'd be interested in for the future.

Shipping on Docker got us a really wide set of users. And it's something we were already pretty comfortable running. And it's fairly easy to develop against. But long term, we'd love to support pretty much any container runtime feasible.

ADAM GLICK: Craig, are you asking what is the next of kin of kind? I just had to put that in there.

CRAIG BOX: I was going to ask, are there any situations where you could see a negation, where you have an unkind?

BEN ELDER: Well, hopefully not. But we have joked about-- for example, kinder is a project in kubeadm to add some extra fun tools to it that are particularly useful for developing kubeadm. Kindest is the Docker registry where we store the images. So we've had some fun with that now.

I think in this case, we'll probably just focus less on the acronym. But since the name is "kind," it's just fine.

CRAIG BOX: It helped get it out there. How did you build a community around this project?

BEN ELDER: I would give a lot of credit to James Munnelly over at Jetstack who helped us launch this. When we were first kind of looking at this idea in the testing community in Kubernetes project, they had some similar issues at Jetstack. They use a lot of the same stack and contribute back to it.

And they were also looking at, how can they run these little local clusters in the same kind of CI? And James came in and discussed a lot of the ideas and then further helped with a lot of the community building, getting things set up, and helping answer people's questions and design and code review and just everything.

But beyond that, I would also say that our friends in the SIG Cluster Lifecycle community, especially around the kubeadm project, have done a lot to evangelize and help contribute important features and debug and that sort of thing. They've been really instrumental in building the community out.

ADAM GLICK: What's next for kind?

BEN ELDER: So definitely the multiple runtimes. We're also just looking to make it easier to use, more robust. We've put a lot of focus into improving the code quality recently. We're looking to make it easier to embed into test runners.

So if you're writing some Go program to test your code, kind is written in Go. You can just sort of embed what the command line uses and invoke it directly from your test harness. But we need to do a few more things to make it easier to consume in that case.

CRAIG BOX: Where should people go if they want to try out kind or learn more about it?

BEN ELDER: It's in the Kubernetes SIGs GitHub organization. And we have a website at kind.sigs.k8s.io. And the repo is at sigs.k8s.io/kind.

CRAIG BOX: And we'll have all those links in the show notes.

BEN ELDER: And we'd love to have everyone come check it out. File feature requests, bugs. Contribute. We've got some docs for all of these things.

CRAIG BOX: All right, Ben. Thank you very much for joining us today.

BEN ELDER: Thanks for having me.

CRAIG BOX: You can find Ben on Twitter @BenTheElder.


ADAM GLICK: Thanks for listening. As always, if you enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on Twitter @KubernetesPod. Or reach us by email at kubernetespodcast@google.com.

CRAIG BOX: Make sure to drop us a line if you want to test Adam's puzzles. You can also see our website at kubernetespodcast.com, where you will find transcripts and show notes. Until next time, take care.

ADAM GLICK: Catch you next week.