#41 February 19, 2019

Ingress, with Tim Hockin

Hosts: Craig Box, Adam Glick

The history of Borg influences the history of Kubernetes in many ways: Google has different teams handle “get traffic to a cluster” and “serve traffic”, so Kubernetes has a conceptual split here too. Tim Hockin, Kubernetes co-founder, Google principal engineer and former Borg/Omega team leader, joins Adam and Craig to explain the history and future of the Ingress API, why it’s taken so long to get to v1, and how it might evolve in the future.

Do you have something cool to share? Some questions? Let us know:

Chatter of the week

News of the week

ADAM GLICK: Hi, and welcome to the Kubernetes Podcast from Google. I'm Adam Glick.

CRAIG BOX: And I'm Craig Box.

[MUSIC PLAYING]

ADAM GLICK: So how was your holiday, Craig?

CRAIG BOX: I enjoyed a very presidential holiday. It's fun being an American. It's a pleasure to get to do it every now and then. I went and stared at the Pacific Ocean. It's not really Pacific Ocean weather on this side. When I get to New Zealand in a couple of weeks' time, it will be much more appropriate Pacific Ocean weather. And I shall stare back at America across from the other end.

ADAM GLICK: Did you see Willy?

CRAIG BOX: He wasn't there, I'm afraid.

ADAM GLICK: Ah, ever since he got free, you just never know where he's at.

CRAIG BOX: Yeah, actually, Richard Stallman likes him to be called a GNU/Free Open Source Willy, these days.

[CHUCKLING]

How was your holiday?

ADAM GLICK: It was good. I actually heard a fascinating debate on another podcast that I really enjoy called "Intelligence Squared." They had a debate about whether an AI can change your mind. And it was a debate between a human debater and an AI. And the AI is both listening to the arguments being made by the other person and arguing back through text-to-speech.

And it is a fascinating listen, too, just from a technology standpoint of the state of what AI is capable of. And if someone has an hour to spend and is curious, it is worth the time.

CRAIG BOX: On a similar note, I can recommend a series of podcasts by [Jason Snell and] John Siracusa named "Robot or Not," where Siracusa goes through and gives [Jason] his opinion on whether or not various characters in movies meet some very tight criteria for whether something is, in fact, a robot. Things like, is the Terminator a robot versus is it a cyborg. And there are rules.

But he has an inimitable way of getting his point across. And it's little bite-sized episodes. It's well worth a listen.

[CHUCKLING]

ADAM GLICK: It sounds fantastic. Just hand out little stickers that they can put on their chest. You have robot, cyborg.

CRAIG BOX: Unfortunately, I'd say most of them are only a couple of minutes long. After about a hundred of them, they ran out of robots. And then it's just basically John giving his opinion on whether anything is a member of any other arbitrary category. Like, is a hot dog a sandwich?

ADAM GLICK: You know, there is a steep debate on that, as it turns out.

CRAIG BOX: Yes, and you can hear all that on "Robot or Not."

ADAM GLICK: Excellent.

CRAIG BOX: Should we get to the news?

[MUSIC PLAYING]

ADAM GLICK: Arm has joined the CNCF as a gold member. Arm is a chip design company, and you probably have one of their CPUs in your pocket right now. Their chips are most often found in cell phones, IoT devices, and other places where low power consumption is more important than raw computing cycles. A number of cloud vendors have recently added Arm-based instances.

And we hope to see Kubernetes expand further towards the network edge. Arm also owns Treasure Data, who makes Fluentd, one of the earliest projects to join the CNCF.

CRAIG BOX: The Cilium project has released version 1.4 of their kernel-accelerated network driver for Kubernetes. Highlights in this release include Kubernetes service routing across multiple clusters, transparent encryption, and IPVLAN support, all promising better security and performance. This version also includes instructions for adding Cilium to a GKE cluster by installing it as a CNI driver on each node and reconfiguring the kubelet to use it.

While not a currently supported configuration, this project is one to watch.

ADAM GLICK: IBM has followed Google Cloud's announcement last year of a serverless add-on to GKE with their own managed Knative add-on for their Kubernetes offering. Using the new light board medium, also known as grease pen on glass, Sai Vennam talks through IBM's new offering and what's available as an experimental feature.

CRAIG BOX: Also a happy recipient of a light board for Christmas: Microsoft's Brendan Burns. He has posted six videos talking about Kubernetes basics and his favorite topics, writing back-to-front all the while, including the semantic difference between "serverless Kubernetes" and "serverless on Kubernetes".

ADAM GLICK: Two announcements from Amazon this week. First up: "Bonjour", "Namaste", and "Alright?" from EKS, which has found its way to their Paris, Mumbai, and London regions, respectively, bringing the total available to 12, or "just over half" of the current regions.

Secondly, a new CNI plug-in for EKS provides the headline feature of jumbo Ethernet frame support on selected instances.

CRAIG BOX: The CNCF has posted the schedule for Kubernetes day in India. The single day, single track event is the first in the series of Kubernetes Day events the CNCF is bringing to locations outside of the traditional KubeCon cycle. Starting in Bangalore on March 23, the keynote will be given by Liz Rice from Aqua Security, our guest on episode 19.

Early bird tickets are now on sale. And there is even a 20% discount for Kubernetes Meetup members.

ADAM GLICK: Finally, more validation from tech publications that Kubernetes is the winning choice. The Information, a subscriber only website, held a panel at the IBM Think conference recently and suggests Kubernetes is emerging as a platform for all kinds of computing, pointing out that AI and serverless are two leading areas.

CRAIG BOX: And that's the news.

[MUSIC CHIME]

ADAM GLICK: Tim Hockin is a principal software engineer at Google. He's a co-founder of the Kubernetes project and pays attention to topics like networking, storage, node, multi-cluster, resource isolation, and cluster sharing. Before Kubernetes, he worked on Google's Borg and Omega projects, as well as the Linux kernel. And before that, he enjoyed playing at the boundary between hardware and software in Google's production fleet.

Welcome to the show, Tim.

TIM HOCKIN: Thanks, nice to be here.

CRAIG BOX: You've worked at Google a very long time. How long, exactly?

TIM HOCKIN: It will be 15 in June. 15 long years.

CRAIG BOX: Our last published quarterly report said that there are 100,000, give or take, full-time employees here at Google. Where do you fit in that number today?

TIM HOCKIN: Depending on which number you look at and how you slice and dice it, it's somewhere between two-and-a-half and two-and-three-quarters nines. So 99.4%, 99.7% -- somewhere in there.

CRAIG BOX: Yeah, we'll cut it at three. And we'll say you've got a very high uptime, as far as Google employment is concerned.

TIM HOCKIN: Yeah.

CRAIG BOX: I understand now, it's dead person's boots. You have to have someone leave who's been here longer for anything meaningful to change to that number.

TIM HOCKIN: That's right. The new hires don't change it. It's a drop in the ocean. But when somebody who's been here longer than me leaves, we actually feel it.

ADAM GLICK: What was it like back then?

TIM HOCKIN: It was a very different company. Most of the infrastructure organizations sat on one floor. That included SREs, and kernel, and Borg, and everybody. And we all knew each other. And we knew where to go if there was a problem. And if something went wrong, you ran over to Bogdan's office. And you bothered him.

And he was the SRE who knew everything, right? And the Borg team sat right next to us. In fact, we crossed projects a lot. They would work on the kernel, and we would help them with their Borg stuff. So it was a much smaller sort of family, much tighter. A lot looser, too -- there was a lot less stress about keeping things alive. There were a lot fewer products that Google had to offer.

So it was a lot more focused.

CRAIG BOX: And you joined to work on the kernel directly?

TIM HOCKIN: I joined to work on the kernel. I joined in the operating systems team. And I worked on that for about six seconds before I switched into firmware and BIOS, where I worked on the very edge of the hardware and software for a few years.

ADAM GLICK: You've been around long enough to kind of pick the things that you want to work on and have the knowledge and depth to do that. How did you pick what you wanted to work on for Kubernetes?

TIM HOCKIN: Kubernetes, in some sense, was right place, right time. I was working in the platforms organization doing machine management, and cluster monitoring, and those sorts of things, which was very interesting. But the scope was growing. And I could see the things that were happening with the Omega project starting up. And I wanted to be part of that. I wanted to grow the scope of what I was doing.

So I switched into the Borg team, which was owning those projects. Then picked up the Borglet team here and worked within that space for a bunch of years. And as the whole Google Cloud revolution started to happen, internally, and we started to take it really seriously, we saw an opportunity to bring Borg out to the world. And when we got funding for that, it was just too good not to be part of.

CRAIG BOX: We spoke with Dawn Chen a while back, and she talked about some of the steps we had with LMCTFY, et cetera, to open some of these things up. What was the process like at that time, bringing some of those Borg pieces out versus the big investment in Kubernetes?

TIM HOCKIN: Google has done open source forever, right? And open source is a big part of that, so we had pretty clear processes for how to do it. LMCTFY, Let Me Contain That For You, was an initial shot at open-sourcing some container tools. It wasn't really intended to compete with Docker, or to even fill the same market as Docker.

CRAIG BOX: Or else you would have given it a pronounceable name.

TIM HOCKIN: It was really hard to get names past trademark. And I thought the joke was good. I wanted to call Kubernetes, "Let Me Schedule That For You." But I got shot down. So--

CRAIG BOX: You heard it here, first.

ADAM GLICK: We'll rename the podcast, right now.

TIM HOCKIN: So Let Me Contain That For You was literally our internal Google tools -- like, line for line, exact code -- being refactored and moved into open source. And we have a process for that. And that was great; we learned a ton from that. We put it out, and we got some good feedback on it. But at the same time, Docker was happening. And Docker had a very different UX.

So we ended up rolling with that. But the big investment for Kubernetes was really convincing people that this was going to happen. That once Docker caught hold, and it was catching hold, people would quickly move on to the, "I need something to manage my Docker nodes." And this was a case of, "we know how to do this," right? We have a literal crystal ball.

We can look into it and tell people what they're going to need next. And so far, we've been pretty accurate. Not 100%, because the real world is different than Google. But it was a matter of making that argument that this is going to happen with us or without us. And it would be good for us to be part of this.

CRAIG BOX: I remember seeing a slide that Brian Grant put up at a conference once, which said there are 200-ish APIs in Kubernetes, now. But when it first launched, was it about eight?

TIM HOCKIN: Something like that, yes.

CRAIG BOX: Do you remember what were the things in the very first open source release of Kubernetes? Which APIs?

TIM HOCKIN: The very first APIs were Node, ContainerManifest (which became Pod, eventually), Service, Endpoints, and ReplicationController. And that might be it.

CRAIG BOX: So the concept of a service was there right from the beginning. Was the concept of connecting that to a load balancer-- was that in there from the start?

TIM HOCKIN: It wasn't there from the very beginning. Remember that Kubernetes draws from Borg's philosophical point of view. And within Borg, there's a very distinct split between running a thing in a cluster and bringing traffic in from the outside world.

CRAIG BOX: Right.

TIM HOCKIN: We have different teams who manage those things. They're very different concepts. And so we put out the first versions of service. And we said, here's how you group your applications together. Here's how you define a VIP for them. And here's a whole bunch of affordances for how to route traffic to them. How you get traffic into it is somebody else's problem, right? That's an edge problem.

And it became pretty clear that nobody knew what to do with that. It just was too confusing. It was too abstract for people. And so we added an API that said, create me an external load balancer at the same time, which was sort of hastily done. It was not a great API. But it was enough to convince people that, aha, that's the piece I was missing. So before we got to 1.0, in the run up to 1.0, we retooled that into the Service "type" API.

So now, you have Services of type NodePort and type LoadBalancer. And that was designed to accommodate different cloud providers: the way Google does it is different than the way Amazon does it. So it was there in 1.0, but it wasn't there in, like, 0.1.
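
For illustration, here is a minimal sketch of that Service "type" API as it looks today; the name, labels, and ports are placeholders:

```yaml
# A minimal Service of type LoadBalancer. With type: LoadBalancer, the
# cloud provider provisions an external load balancer for the Service;
# type: NodePort instead exposes it on a port on every node.
apiVersion: v1
kind: Service
metadata:
  name: my-app        # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: my-app       # group Pods together by label, as Tim describes
  ports:
  - port: 80          # the port the VIP (or load balancer) serves
    targetPort: 8080  # the port the Pods actually listen on
```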

CRAIG BOX: And so when was the concept of an Ingress introduced? What happened to bring that about?

TIM HOCKIN: Ingress, that's a great question. I forget if it was before 1.0 or if it was shortly after 1.0. The idea of an ingress was people want higher level processing, specifically HTTP, but possibly other things. And it was named because it was intended to be sort of a front end for services. This is how you get traffic into your services, a la Ingress. The rest of it never really manifested.

The HTTP parts were the parts that people stuck with. And we never followed up with the TCP stuff. So that was the impetus for it. We wanted to provide access to higher level concepts that people could use.

ADAM GLICK: What do you think you got right in that versus what you got wrong?

TIM HOCKIN: There's a lot wrong with Ingress, frankly. I'm allowed to say that, because I wrote it, or at least helped. I think the parts we got right are it's relatively simple. It's approachable for people who aren't experts in HTTP. And it's very self-service, which is nice and convenient. The parts that we got wrong are it's really simple. It is the lowest common denominator API.

So there's not a lot of features there, unless you go into the non portable space with annotations and stuff. And annotations are a really clunky way of extending the API. They work sort of onesie, twosie, but they're not good for complicated, structured stuff. I think that's the biggest place where Ingress sort of falls down. It just doesn't fit what people really want to do with it.

CRAIG BOX: What is the base feature set? You're talking about annotations. There are a few fields that are in Ingress. And then there are a lot of people who have used the annotation mechanism in order to extend that. What are the base features of an Ingress?

TIM HOCKIN: The base features of an Ingress are: given a host name and a URL path, map it to a Service back end. That's pretty much it. We also have an affordance for TLS. And that's it. There's no "upgrade my HTTP to HTTPS". There's no "send a redirect" -- none of the advanced functionality that people expect from something like NGINX or Envoy.

Those things just don't exist in some of the cloud load balancers, and so we played to the lowest common denominator.
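
As a sketch of that base feature set, here is a minimal Ingress in the extensions/v1beta1 form it had at the time; the host, Secret, and Service names are placeholders:

```yaml
# Host name + URL path mapped to a Service back end, plus the TLS affordance.
apiVersion: extensions/v1beta1  # moving to the networking group, as discussed later
kind: Ingress
metadata:
  name: example
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls     # Secret holding the TLS certificate and key
  rules:
  - host: example.com
    http:
      paths:
      - path: /app
        backend:
          serviceName: my-app   # matching traffic goes to this Service
          servicePort: 80
```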

CRAIG BOX: It was designed explicitly with the cloud load balancers as a model, versus, say, Apache VirtualHosts or something someone else might have used to redirect traffic in the past?

TIM HOCKIN: Yeah, it was designed to be usable in cloud environments where you didn't want to run the thing yourself. You wanted to use the hosted stuff. And so we said for the initial design, let's stick to the lowest common denominator. We didn't really have-- we still don't really have, within Kubernetes, a concept of optional features. It's not really formal that way.

And there's not many APIs that have features that don't work in some environments. We were really sticklers for the same experience works everywhere. It's not 100% true. And a lot of the places where it breaks down tend to be around networking. Surprise. But we said we would come back and revisit this later if we thought that Ingress was too weak for people to actually use.

That's sort of where we are, now.

CRAIG BOX: Ingress was one of the first APIs where you define an object, and then you have interchangeable controllers. You define Pods, and then it's up to the kubelet, for example, to place them. But you actually need a controller running in your cluster. And people will generally provide one -- like GKE provides a controller to provision a Google Cloud load balancer.

But then there is also some sample code around, but no real, official default. Like the Kubernetes project does not say what it means to be an Ingress in the absence of a cloud provider. Has that helped or hurt?

TIM HOCKIN: I think it helps in the sense that it has fostered ecosystem, which was part of the goal-- was to get people who weren't us to provide solutions. So the NGINX Ingress, for example, is written by community people and has received wonderful uptake. Lots and lots of people use it. It's very full featured. It works very well. And that's fantastic, right?

If we had decided to write it ourselves, it probably would not have been so full featured. So I think it's great that we had community people step up. It has certainly been a bit of friction for people who are adopting Kubernetes to comprehend that the API is there, but the implementation is not. This predates CRDs or ThirdPartyResources. So there really wasn't any other way to do this.

So yes, that has certainly been a point of confusion for some people.

CRAIG BOX: Did it help guide the development of CRDs and third party resources, the idea that you have an implementation separate from the API object?

TIM HOCKIN: Not really. It's always been in the back of my mind that Ingress probably would've been a perfect example of a CRD. But we wanted to write a common API. We didn't want divergence. We didn't really want an NGINX Ingress, and a Google Ingress, and whatever. Because then people don't get any portability benefits. Now, it turns out some large plurality of people end up using non portable features via annotations, anyway.

So maybe that wouldn't have been so bad, in hindsight. These are the conversations we're having right now around Ingress.

CRAIG BOX: Ingress is still beta, today. Why has it stayed in beta for so long?

TIM HOCKIN: We left it in beta because we weren't sure that we wanted to take it to GA. We sort of thought, well, we know we want to do better than this. But we're collecting data from users. It's been a topic at several KubeCons: what do we want to do with it? How do we want to proceed? We've done surveys and collected anecdotes from lots of users, and from implementers too, trying to figure out how we want to push it forward.

It's clear that it's useful, and it serves a purpose for a lot of people. But a lot of people simultaneously expect it to do more, to have more features, and they expect it to continue to be portable -- which are sort of mutually exclusive. So we've sort of not pushed forward with it on GA. We are, in fact, pushing forward with it on GA now, because I have to admit that it is a de facto GA.

People are using it. They're using it in production. And the fact that it's labeled beta does nothing but inhibit people from actually using the system, overall. So let's just call it GA. Maybe it won't ever evolve any further. We're working on what v2 of this thing will look like. It might look wildly different. It might have a different name. But we'll support Ingress either way.

CRAIG BOX: So while we move towards the idea of a changed Ingress in v2 or something, there have been implementations of services from Google and from other people that address the same kind of space. I'd like to talk through a few of them. So first of all, we developed a CRD called the BackendConfig to represent some of the objects in the Google Cloud load balancer that are not mentioned in the Ingress.

Some of them started out as annotations. What was the process like of building out our implementation of this?

TIM HOCKIN: So the Google Cloud HTTP Load Balancer has a ton of features that are interesting for users of Google Cloud, right? There are different ways to attach to different back ends and metadata about load balancing algorithms and those sorts of things. And they're not generally applicable to the rest of Ingress, right, or to other users of, say, NGINX.

So we started out by putting them in as annotations. And annotations are a very flat structure. There is no way to represent, given a list of things, which items in that list do I want to apply this information to. So you end up building a really clunky, sort of parallel, "stringly-typed" API embedded in your YAML. Literally, it's JSON-formatted strings embedded in YAML, which is just not a pleasant user experience.
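
A sketch of the pattern Tim is describing, with a hypothetical annotation key; note the structured JSON hiding inside a flat string value:

```yaml
# The "stringly-typed" extension pattern: structured config smuggled into a
# flat annotation as a JSON-formatted string. The annotation key here is
# hypothetical, purely for illustration.
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    example.com/lb-config: '{"ports": {"80": "my-config"}}'
spec:
  selector:
    app: my-app
  ports:
  - port: 80
```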

So we worked with what we had. And as we wanted to add more and more of these features -- we have user demand for turning on these features -- some of them are as simple as a Boolean feature per back end service. And the Google HTTP API is much more data-normalized than Ingress: Ingress is one resource that tries to do everything, and Google's API is like 800 resources that are all very specific.

And there's pros and cons to each model. So we talked through with users how we wanted to evolve this, and we decided that a CRD was the right way to express it. So now, we have this BackendConfig CRD that you sort of tag along with your Service. And it provides metadata that Ingress can use to figure out how to route data to your Service. And the uptake on that seems to be pretty good.

It seems to be flexible enough. And it's easy now for us to add new features for the Google Cloud users.
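
A sketch of how that looks, based on the beta schema of the time (field names and the annotation key may have changed since; names are placeholders):

```yaml
# A BackendConfig carrying Google Cloud load balancer settings...
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-backend-config
spec:
  timeoutSec: 40    # backend service timeout
  cdn:
    enabled: true   # one of the simple per-backend Booleans mentioned above
---
# ...tagged along with the Service via a per-port annotation.
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80": "my-backend-config"}}'
spec:
  selector:
    app: my-app
  ports:
  - port: 80
```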

CRAIG BOX: One of the earliest open source projects that Heptio released was an Ingress controller built on Envoy. And they branded it-- I think it was called--

TIM HOCKIN: Contour.

CRAIG BOX: Contour, thank you. And they worked on that for a while. And then they developed the concept of an IngressRoute, which, again, is a CRD. How would you explain the concept of an IngressRoute?

TIM HOCKIN: Sure, IngressRoute is a proposed alternative to Ingress.

CRAIG BOX: OK.

TIM HOCKIN: It is similar to Ingress in scope. It is aiming at the same sort of user base. It has more features than Ingress does. It's specifically targeting their Envoy based implementation. Envoy is a very feature rich load balancer proxy. So they're aiming at that feature set. But they made a bunch of design decisions in there that are in response to real user experiences with Ingress.

Things like, annotations are really awful, and let's just not. Also, they have some concepts of explicit delegation, which are kind of nice. They make it sort of easier to reason about. And they decomposed it a little bit, so instead of having one big, monolithic Ingress resource, you can have a bunch of smaller IngressRoutes that sort of assemble into a rule set. And as an API overall, it's a fairly nice API.
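
A sketch of an IngressRoute, based on Contour's beta schema at the time; the host, paths, and delegation target are illustrative:

```yaml
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: example
spec:
  virtualhost:
    fqdn: example.com
  routes:
  - match: /app
    services:
    - name: my-app       # route this prefix to a Service directly
      port: 80
  - match: /admin
    delegate:            # explicit delegation: hand this prefix off to
      name: admin-routes # another IngressRoute, possibly owned by
      namespace: admin   # another team in another namespace
```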

CRAIG BOX: Could it be implemented by something other than Envoy?

TIM HOCKIN: Absolutely, I think it would be no problem to implement it in terms of NGINX. I don't know that anybody's tried it, yet. But I don't see why it--

CRAIG BOX: Should it have been called the EnvoyRoute?

TIM HOCKIN: No. I think that they've done a good job of keeping it fairly neutral. There's a lot of stuff in there. So it may be that some of the Envoy-isms sort of leaked into the API. If we were to push ahead with this API, which is a possibility for Ingress evolution, then we'll have to look at it with a hard eye towards that and make sure that we can come up with some alternate implementations.

CRAIG BOX: Another project based on Envoy is Istio. Istio has had two different implementations of something which is Ingress-like, but I think it doesn't actually use the Kubernetes Ingress concept.

Would you say, first of all, that theirs is more based on what you said before -- that Borg has the idea where a different team gets traffic into your cluster? Is that how they've developed it? Have they said the Istio team will be responsible for bringing things into your Kubernetes environment? And then do you see a path between those two things?

TIM HOCKIN: Yeah, Istio, like Kubernetes, derives from Google internal concept space in many dimensions, not just the one you mentioned there. Istio, as an API, is very full featured, even more so than IngressRoute. It really uses a ton of the functionality that's available in Envoy. At least in theory, Istio could be implemented against other proxies. I don't know of anybody who's done that.

But in theory, it's doable.

CRAIG BOX: There was, for a while, some NGINX implementation. But I don't think it's still maintained.

TIM HOCKIN: Yeah, that was what I understood. So I don't know if anybody's doing it right now. The API, though, is actually really pretty nice. It takes some different trade-offs from IngressRoute and Ingress. Specifically, it looks at things like TLS and says, there are some administrator-ish things, and we're going to put those in one resource. We call it a Gateway. And that declares: this is the actual port of entry for my traffic.

And then there's the rule mappings -- how I want to map a URL into a back end service, how I want to do traffic splitting, those sorts of things. And that's something called a VirtualService. And that's more like a URL map. And they split, because they're different responsibilities. And they have a couple of other concepts, too. But for parallels to Ingress, those are really the two that matter the most.
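
A sketch of that split, using Istio's v1alpha3 schema; hosts, names, and weights are illustrative:

```yaml
# The administrator-ish resource: the actual port of entry for traffic.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway  # bind to the Envoy-based ingress gateway Pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "example.com"
---
# The URL-map-like resource: routing rules, including traffic splitting.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-routes
spec:
  hosts:
  - "example.com"
  gateways:
  - my-gateway
  http:
  - match:
    - uri:
        prefix: /app
    route:
    - destination:
        host: my-app-v1  # 90/10 traffic split between two back ends
      weight: 90
    - destination:
        host: my-app-v2
      weight: 10
```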

So this is another possibility with respect to Ingress evolution is maybe we want to take some ideas from Istio.

CRAIG BOX: The Istio team made a big change in the 0.7, 0.8 time frame, if I recall, to build this out. Did you have any involvement, from the Kubernetes side, in looking at what they had done to start with? And do you know the story of why they decided to make that change at that point?

TIM HOCKIN: I don't. They have been largely running without-- I'm not going to say without contact, but they've been doing their own thing. They're very smart, and they've looked at Kubernetes, figured out how it works, really taken it apart, and found ways to work within the bounds of what Kubernetes offers. In fact, sometimes too much so. There are things in Istio that I look at, and I say, "boy, if you had just asked us for a change in Kubernetes, we would have done it. And it would've made your life easier."

So there are still opportunities there for us to make it easier to build things like Istio. Istio's an example of this product space, but not necessarily the only such thing. And they've run very far without our involvement. I only really started becoming deeply aware of their APIs when they hit v1alpha3, which I guess is very different from their v1alpha1 APIs.

And the v1alpha3 APIs are the ones I was mentioning earlier.

ADAM GLICK: You mentioned that you're finally making the decision to move this to GA, because it's de facto GA anyway. When will that GA be official?

TIM HOCKIN: There's a KEP open right now. And I think there's even a PR open, in my queue to review, to start the process of moving it from the extensions API group into the networking API group. And then the follow-up to that will be to move it to GA. So probably 1.15. So not this upcoming release, but the one after would be the earliest opportunity for GA.

We have a short list of less than 10 things that we're thinking about changing or fixing, if we're going to move it to GA. And then we're just going to put a bow on it.

ADAM GLICK: So in the next five to six months?

TIM HOCKIN: Hopefully, yeah. Worst case, one more beyond that. I don't see it going beyond 1.16 for the GA of this.

ADAM GLICK: 2019.

TIM HOCKIN: Yes.

CRAIG BOX: And do you think that there is a consolidation? Do you think all the different things we've mentioned before, that the base level Kubernetes Ingress object or something like it could be broad enough to encompass supporting all of those projects in some future version?

TIM HOCKIN: Possibly. I have a lot of questions in my mind here. And part of the reason we're moving slowly on this topic is there's a lot to cover, and I don't want to burn another two years with an API that we don't want to support. So there's the possibility that we want to adopt ideas from Istio, or steal ideas from Istio. Part of me says, though, I don't really want to be in the business of maintaining another HTTP API, where we have to keep pace with industry standards and the capabilities of various back ends.

That's hard. It's a big API. The Istio API's big. There's a lot of really subtle functionality there. And you have to be something of an expert in HTTP to really capture those things in a good API. This is my biggest concern that we have right now, is Kubernetes has a boundary of what Kubernetes is and what Kubernetes isn't. Kubernetes is not Istio. And I don't think we can ever say you have to run Istio to be part of Kubernetes or to use Kubernetes.

So the goal here is just to make sure that Istio and things like it, because there are other service meshes out there, can work well within the framework of Kubernetes, and that we give them all the affordances that they need out of the API without being too opinionated about which one you're using. That said, those APIs are decent APIs. And if you can decouple the API from the implementation, there's a lot that we can borrow.

ADAM GLICK: What does it feel like to have built something so influential and have that as part of your legacy?

TIM HOCKIN: It's a little overwhelming. It's success beyond our wildest imaginations. When we started this thing, I thought that maybe we would help build some cool cloud products. To watch the fervor around it -- to go to these KubeCons and see literally thousands of people there to learn about it, and talk about it, and share what they're doing -- is mind-blowing and overwhelming and super fun.

Very stressful, also. I try to take very seriously the importance that people are putting on this stuff. And I try to read everything I can find in blogs, and Reddit, and whatever, from people who are sharing their experiences with Kubernetes -- and try to learn from the good and the bad, and try to make life better for these people.

CRAIG BOX: In your spare time, you're also the official Kubernetes logo designer. Do you have a favorite of all of the variations that have come out?

TIM HOCKIN: A favorite of all the variations? So we try to do a fun new variation for each developer summit. The event has changed names over the years, but the intention is still the same. So we try to do a one-off logo for each of those. I think probably the pixelated one or the brush strokes one are my favorites. The pixelated one, I originally wanted to be LEGO.

I wanted to try to draw it like LEGOs. Turns out, it's really hard to get all the shadows and stuff right on all the little pegs. So--

CRAIG BOX: Could you have just built it in LEGO and taken a photo of it?

TIM HOCKIN: It's very hard to translate a photo into a t-shirt, especially when you've only got three colors to work with. So they're all designed sort of by hand. The brush strokes one I thought was interesting, because we took some real photos of brush strokes and downsampled them to one color. And I thought that came out pretty nice.

CRAIG BOX: We'll put some pictures of those shirts in the show notes. Tim, thank you so much for joining us today.

TIM HOCKIN: Thanks for having me.

CRAIG BOX: You can find Tim on Twitter, @thockin, and all over the Kubernetes mailing lists.

[MUSIC CHIME]

ADAM GLICK: Thank you for listening. As always, if you've enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can reach us on Twitter @KubernetesPod or reach us by email at kubernetespodcast@google.com.

CRAIG BOX: You can also check out our website at KubernetesPodcast.com, where you can find transcripts and t-shirt designs. Until next time, take care.

ADAM GLICK: Catch you next week.

[MUSIC PLAYING]