#226 May 29, 2024

A Decade of Kubernetes Contribution

Hosts: Abdel Sghiouar, Kaslin Fields

This episode is the first in our four-part Kubernetes 10 Years Anniversary special! The focus of this episode is on Kubernetes maintainers who have been involved with the project since its early days, and who are still active today. Featuring guests: David Eads, Davanum Srinivas (Dims), and Federico Bongiovanni.

David Eads is a senior principal software engineer at Red Hat. He started contributing to Kubernetes before v1 and now serves as a SIG Auth tech lead and SIG API Machinery tech lead and chair.

Dims is a principal engineer at AWS and a long-term contributor to Kubernetes who has served on multiple committees for the project. Today, Dims is on the Technical Oversight Committee, or TOC. Welcome to the show, Dims!

Federico Bongiovanni is an engineering manager at Google. He started using Kubernetes in the early days at a previous company, and became a contributor about six years ago when he joined Google. Today, he's a co-chair of SIG API Machinery. Welcome to the show! Would you like to tell us more about yourself?

Do you have something cool to share? Some questions? Let us know:

News of the week

Links from the post-interview chat

ABDEL SGHIOUAR: Hi, and welcome to the "Kubernetes Podcast" from Google. I'm your host, Abdel Sghiouar.

KASLIN FIELDS: And I'm Kaslin Fields.

[MUSIC PLAYING]

We are so excited to celebrate the 10 year anniversary of Kubernetes, coming up on June 6th, 2024. To join in the fun, we are doing a series of four special episodes diving into the Kubernetes journey over the last decade, and looking into the future. This episode will feature active leaders in the contributor community who have been around since the early days.

Future episodes will feature other prominent community members from the very beginning, new leaders guiding Kubernetes into its second decade, and community members with deep high-performance computing and AI/ML backgrounds who will share how the focus on these workloads has changed for the project.

In this first episode of our 10 year anniversary series, we speak to SIG chairs and longtime contributors Davanum Srinivas, a.k.a. Dims, David Eads, and Federico Bongiovanni. But first, let's get to the news.

[MUSIC PLAYING]

ABDEL SGHIOUAR: Istio 1.22 was released. In this new major release, ambient mesh moved to beta, the Istio core APIs moved to v1 from v1beta1, Gateway API support is now stable, and a lot of other features have been added. Check the release details in the show notes.

KASLIN FIELDS: The Gateway API project released version 1.1. Several features have graduated to GA in this release, notably service mesh support, GRPCRoute, and many more.

ABDEL SGHIOUAR: Traefik Labs announced Traefik 3.0. The new version of the cloud native proxy introduces support for Wasm, OpenTelemetry, SPIFFE, and the Kubernetes Gateway API.

KASLIN FIELDS: Microsoft Build, Microsoft's largest developer conference, took place in Seattle, Washington, May 21st to 23rd. Many of the announcements from the event centered around Copilot, Microsoft's AI offering. Announcements included a new .NET cloud-native development stack called .NET Aspire, a new version of C#, and many more.

ABDEL SGHIOUAR: The Kubernetes birthday bash party at the Google Bayview campus is coming up fast on June 6th. If you can't make it to the party in California, there are over 50 Kubernetes regional events happening around the globe. Check out cncf.io/kubernetes to find a 10 year anniversary event near you.

KASLIN FIELDS: And that's the news.

[MUSIC PLAYING]

Today, we're speaking with David Eads. David is a senior principal software engineer at Red Hat. He started contributing to Kubernetes before V1, and now serves as a SIG Auth tech lead and SIG API Machinery tech lead and chair. Welcome to the show, David.

DAVID EADS: Hi. Nice to be here.

KASLIN FIELDS: So first off, we're going to start off with an icebreaker, because-- a little behind-the-scenes know-how for everybody-- in our interview planning docs, we actually have an icebreaker segment, and I usually forget that it exists.

[LAUGHS]

But I'm going to try to use that more often.

DAVID EADS: You know, it's just because mine is the most interesting.

KASLIN FIELDS: It is. So, please.

DAVID EADS: One of my favorite stories is one where-- it's not even software related, but I just love building things, and I have for years, and years, and years. And so, when I was a teenager, I decided I was going to build a boat. And this is dating myself just a little bit, but it was pre that time where you just look up like, how to make a rowboat online. And so, I had a picture of a rowboat from a cartoon, and I was going to build it in my backyard. And--

KASLIN FIELDS: This is going to go great.

[LAUGHS]

DAVID EADS: I figured out how I could bend my wood using these trees that I had, because it was the tools I had available. I had to make it work. And so, I bent this boat. It looked beautiful. I got it all together, spent hours and hours putting this thing together.

And then, finally that evening, I go to show it to my dad for review. And I think, he's going to be so proud of me. He just sort of stares at it for a while and he goes, how are you going to get the tree out of the middle of your boat?

[LAUGHS]

And I had cleverly built my boat with a tree right in the middle of it. And some significant refactoring later-- and I had not cut down the tree-- I had successfully reorganized my boat and put it back together alongside it. But I have gotten better at building things since. But I still just love it, whatever comes up.

KASLIN FIELDS: That's amazing.

DAVID EADS: Yeah.

KASLIN FIELDS: You lived the dream of I saw a thing in a cartoon, and then I made it. Definitely have had a few of those moments.

[LAUGHS]

DAVID EADS: Yes.

KASLIN FIELDS: Love it.

DAVID EADS: Good time.

KASLIN FIELDS: So did the boat work in the end?

DAVID EADS: Oh, it was a terrible boat. It floated really, really well but because I had like, a picture of a boat and a handsaw, I didn't know that boats had to sit like low, like 10 inches tall. So I made the seat of my boat, like, twice as high as normal. So it was the tippiest thing on Earth, and it had no keel, and it was awful. My future boats were all much better.

KASLIN FIELDS: So you made more boats after this? You didn't get--

DAVID EADS: Oh, my boating misadventures go on and on and on.

KASLIN FIELDS: Wow. Well, that's a whole other episode.

DAVID EADS: Yeah.

KASLIN FIELDS: But today, we are excited to be talking about the 10 year anniversary of Kubernetes. And you have been contributing to Kubernetes for quite some time-- we mentioned in your intro, since before V1. And now you're doing SIG Auth and SIG API machinery. So tell me a little bit about what your journey has been like. How did you start out?

DAVID EADS: Accidentally. So I got hired in 2014 by Red Hat. And at the time I got hired, they were hiring me to work on a Ruby application that was OpenShift at the time. And then by the time my first day came, instead of getting handed like a Ruby manual, they said, "What do you know about Golang? Have you ever heard anything about containers? And do you know anything about this thing called Kubernetes?"

And so I looked and said, "I've heard of Go, but never used it. I don't know what a container is. And Kubernetes just sounds weird." And so it turned out the project was really, really young still. And I-- I mean, I joined-- my first PRs were fixing kube-apiserver error messages and making kubeconfig files a thing-- like kubeconfigs didn't exist.

And yeah, I mean, it went from there with all the pieces that you need after that. So it was, how are you going to make authentication work? So like the very baby steps of SIG Auth, designing RBAC, and things like that, and expanded from there. I've done work in SIG Apps and a bunch of the infrastructure pieces over the years, but API-Machinery is still my favorite.

KASLIN FIELDS: And in that early of days, there weren't SIGs yet. It was just you come in and work on whatever you can find.

DAVID EADS: Yeah. You come in and you work on whatever needs fixing that particular day. It was really wide-ranging. All sorts of stuff that you think of today just didn't exist. Like namespaces. Namespaces did not exist when I started on the project.

KASLIN FIELDS: And you mentioned kubeconfigs didn't even exist, I don't know that I even remember that.

DAVID EADS: Yeah. kubeconfigs didn't exist. One of my favorite issues of all time is one where Eric Tune-- he's stepped out for a while now, but Eric Tune had an issue titled "kubeconfig merging causes hair loss." And yeah, it was a heck of a thing to fix.

KASLIN FIELDS: Oh, I've got to find that and put that in the show notes.

DAVID EADS: I love it. So yeah, that came in. And just all sorts of core primitives people take for granted today just weren't there.

KASLIN FIELDS: Yeah. Just creating the baseline, even.

DAVID EADS: Yeah. I mean, I would describe those early days as the finding-purpose days of Kube. Like, how do we get to a point where we can run something useful that people want to use? How do we get to a point where we can scale it to at least 1,000 nodes before it falls over? How do we handle basic security principles to make this thing go? And it was really rapid.

KASLIN FIELDS: Interesting. So you come into Red Hat new, bright-eyed, bushy-tailed. And they're like, congratulations, now you're a container person.

DAVID EADS: Yes. It was exactly like that. And in particular, they came up with like, we need to have an authentication and authorization story, and we need to be able to add types and figure out how to make all that work. Oh yeah, and we don't have a CLI that works really well yet, so see what you can do there.

KASLIN FIELDS: So you dive right into these kind of core fundamentals with a little bit of guidance on direction. And from there, what is the path like? What were some of the most pivotal moments so far in your Kubernetes open source career?

DAVID EADS: Yeah. So thinking through that, are you interested in the tech aspects? Are you interested in the ecosystem growth? Are you interested in personalities that were there back then? Where would you like to go?

KASLIN FIELDS: Yeah, those are some interesting potential paths here. It is the 10-year anniversary, so I imagine some of the folks listening today are interested in the history of where the project came from. Probably a lot of folks are interested in that-- hopefully everyone. And I imagine some folks out there are interested in getting involved in the project themselves. So with that framing of who might be listening to us today, what would you say are some of the pivotal things that you've experienced?

DAVID EADS: So there's that first stage from about pre-1.0 till-- I'll call it maybe 1.3. And during that time frame, we were trying to figure out what we were going to build, how people were going to basically use the thing. And that's when you see the development of namespaces and services. And as you start releasing, you get to see more things around workload. So you see replication controllers, deployments, jobs. They're evolving during that time.

And we pretty rapidly got a core set of use cases that the really devoted people could actually run and try to use. And then from there, I'd say from 1.3 to 1.9 or 1.10, there were a lot of scaling issues. Basically, people wanted to use it, they were willing to put up with it, and you saw tremendous forces placed on us to hit scale numbers.

And so, places like API-Machinery and SIG Apps, we ended up looking at it and saying, what do we need to do to make ourselves more efficient so that we can run with less than 50 gigs of memory or something like that? And all sorts of projects evolved in that time. etcd evolved. A lot of the apps pieces evolved, the clients evolved. We developed things like Protobuf. And so, we found that need and we pushed forward on that.

And then, from about 1.8, maybe 1.8, all the way up through, I would say, 1.21, we were in a stage where we were trying to foster an ecosystem. It was back very early that Brendan Burns had identified a need to have people be able to plug in their own kind of resources, to say, I have this new type of API, how do I get it on there because I need to coexist with you in your system?

And that insight was great, but the implementation needed a little bit of work. So we invested a tremendous amount of time. Because it's not just that you have to be able to define a type. You have to be able to define a type that works like all the other types in the system, and that's why, today, you can install a CRD and suddenly you have access to it. That's not an accident.

That's RBAC being built, new features being added, and CRDs working with the same core implementation, so that all the pieces come together around it, with a working client ecosystem, and run. Things like admission webhooks developed during this time, so that it was possible to extend the system.

And we were really successful there. You looked at KubeCons at that time and you were seeing all sorts of projects popping up trying to build on those concepts. And that's where you get all sorts of operators managing various pieces, different apps running in cluster extensions being possible. And that was a really exciting time.

And then, it got to a point where we've got all this stuff to add, and now we're in a stage where it's not working super well together. And it's not the case that everyone has a PhD in Kubernetes. And so, I would say somewhere around that like 1.22, you can see us really trying to say, OK, how can we get to a point where our ecosystem can actually produce things that our average kube admin can run and actually manage?

So there's been a lot of attempts at simplification there, reducing operational complexity. And you can see that in new features like validating admission policy, which eliminates network dependencies and makes things more reliable, or Server-Side Apply, which tries to make it so that clients can coexist on API resources and not fight with each other. And I think those are going to be-- well, they are really valuable things, and people are going to start leveraging them more and more now that they've gone GA.
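To make the Server-Side Apply idea concrete, here is a minimal, hypothetical sketch (not code from the project) of a Go client declaring ownership of just the fields it cares about, using client-go's apply configurations. The ConfigMap name, namespace, data, and the "team-a-controller" field manager are all made up for illustration.

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	corev1apply "k8s.io/client-go/applyconfigurations/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig (default home path; adjust for your environment).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Declare only the fields this client cares about; the "team-a-controller"
	// field manager becomes the owner of exactly those fields.
	cm := corev1apply.ConfigMap("demo-config", "default").
		WithData(map[string]string{"owner": "team-a"})

	// Server-Side Apply: the API server merges this intent with fields owned
	// by other managers instead of clobbering them.
	_, err = clientset.CoreV1().
		ConfigMaps("default").
		Apply(context.TODO(), cm, metav1.ApplyOptions{FieldManager: "team-a-controller"})
	if err != nil {
		panic(err)
	}
}
```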

KASLIN FIELDS: And I think what you were saying at the beginning there is, you've shown the whole journey, really, from the beginnings to the really recent maturation of the project, and something that you mentioned there really reminded me of Gateway API.

When you were talking about those early days when you were just trying to figure out use cases and build the things that those use cases would need, the story that I heard around Gateway API is they built the ingress object in Kubernetes, and that was one of those cases where they came up with a use case, and they built what they thought it needed, and then they realized later on that it wasn't quite exactly what that use case really needed. And so they ended up redesigning it into Gateway API.

DAVID EADS: That's an amazing example. So if you look at it, a lot of those extension pieces that I described that we built for the ecosystem to use, they didn't exist when ingress was created. And so you fast forward and then you look and you say like, well, why is it built this way? It really needs to work like that. I mean, sure, it's obvious now, but it wasn't obvious then.

And you could even see that carrying forward today. So one of the most popular talks at KubeCon in Paris was the ReferenceGrant talk. Like, that room was packed. And they were sitting there trying to figure out, how can we solve this problem? It evolved out of the Gateway API project, where there was a need to say such-and-such has to have access to this thing, and I have the power to grant that. And I'm really looking forward to what that unlocks once we finally develop it.

KASLIN FIELDS: Yeah. There's a lot of chicken-and-egg problems throughout Kubernetes' history. Like we think we'll need this, so we build this, but it was kind of--

DAVID EADS: Definitely.

KASLIN FIELDS: --one first, and not the other.

DAVID EADS: Definitely. And you don't always predict it accurately. There's some where you predict it and it works out pretty well. Like RBAC is a great example of one of those things where we did a great job figuring out what people really needed in that first go-round. And it has aged incredibly well. I mean, it's still the backbone of authorization after nine years? Eight years? And has been extended to work with new concepts.

KASLIN FIELDS: Iteration is one of those core tenets of technology.

DAVID EADS: Yeah. And I guess on the other side, you have something like namespaces where namespaces have been incredibly valuable for what they are, but I was there when namespaces was made. I remember arguing with Tim Hockin, and Tim was saying, "We have to have hierarchical namespaces. We need namespaces nested in namespaces so that our project will age well."

And I remember sitting there and arguing against him and saying something like, "Tim, we can barely handle one level of namespaces. If we develop two levels of namespaces, it's just never going to work. We'll never be able to get this thing working right."

And I can look back now and I can say, yeah, Tim was right. I wish I had hierarchical namespaces. You can get a certain degree of depth with one hierarchy, but I wish we had been able to come to an agreement about, are we going to have associated sets of projects that may or may not overlap?

Or are we going to have a hierarchy of them? And how could we build that? And the things we might have been able to build on top of that system would be really, really cool.

KASLIN FIELDS: And that is a really nice anecdote to show how things worked in the early days and how they've evolved. Are there any other favorite anecdotes you have from your time on the project?

DAVID EADS: I guess I do have one, actually. There was a time back when the project was a lot less formal. And so, I needed to communicate an idea of security boundaries, and it was a discussion with myself and Eric Tune and Daniel Smith.

And I didn't-- I had the idea in my head, I didn't have the software right there, and I grabbed my kids' crayons and drew a bunch of pictures in crayon. And I took a picture of them on the table and uploaded it to the GitHub issue. And it started being referred to as the Crayon Issue. "We have to resolve the crayon issue!"

And it was great. It communicated the idea. It was just fun to get to do it that way. And it's not that we couldn't do that now today, but today, there's a lot more structure around it.

You'll generally create a presentation, put it into a KEP, and make nice diagrams. But I will say, I will give bonus points to any contributor who sends me a KEP with a picture drawn in crayon. Guaranteed review from me.

KASLIN FIELDS: I love it. I've actually run some workshops internally for my colleagues to talk about how useful it can be to just diagram something out with crayons or whatever you have available to you, because having something visual can help connect the dots when you're trying to explain an abstract technical concept.

DAVID EADS: Yeah, I love it. And it comes in, it's got easy colors, it's easy to draw-- my five-year-old was doing it when I stole her crayons to make my picture. And it's so much faster in many ways than trying to build that perfect diagram. Like, let's get the basic idea here, leave the actual shading, and let's talk.

KASLIN FIELDS: And so, we've talked here about a few different early design decisions and how they've aged, but are there any particular early design decisions that you were involved with? And how do you think those have aged?

DAVID EADS: So I think RBAC aged particularly well. The design originated as a downstream project. It was on the Red Hat side. And someone from CoreOS-- separate companies at the time-- said, "Hey, I really need this in Kube. Could I take it forward?"

And when we took it into Kube, it was basically a straight port. But as we started looking at how people needed to build on top of the roles that we had, roles like edit and view, we discovered that when you have a mutable type system, where custom resources can be created at any point in time, you need to have a way to have that same flexibility in your permissions model. And so, we were able to extend that design really nicely with a concept called aggregation.
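For readers curious what the aggregation David describes looks like in practice, here is a hypothetical Go sketch: a ClusterRole for a made-up CRD labels itself so the aggregation controller folds its rules into the built-in edit role. The role name, API group, and resource are invented for illustration; the aggregate-to-edit label is the documented mechanism.

```go
package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// A ClusterRole that grants access to a hypothetical CRD and labels itself
	// so that the built-in "edit" role aggregates these rules automatically.
	crdRole := &rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{
			Name: "widgets-editor", // hypothetical name
			Labels: map[string]string{
				"rbac.authorization.k8s.io/aggregate-to-edit": "true",
			},
		},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{"example.com"}, // hypothetical CRD group
			Resources: []string{"widgets"},
			Verbs:     []string{"get", "list", "watch", "create", "update", "delete"},
		}},
	}

	_, err = clientset.RbacV1().ClusterRoles().Create(context.TODO(), crdRole, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// The aggregation controller now folds these rules into the "edit" ClusterRole,
	// which carries a matching aggregationRule selector, without anyone editing
	// "edit" by hand. Rules stay strictly additive.
}
```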

Now, again, we weren't perfect. So looking back on it, we might have implemented that differently, but that core concept is a great example of us making a decision at the beginning, it carries forward now. And I know there's spots where people look and say, "Gee, I just wish I had," and the most frequent one is, "Gee, I just wish I had negative rules." And that was highly contentious. Like, "I want you to be able to do everything except."

And at the time we were telling people, no, we can't give you that negative matching rule. There was a lot of frustration, and it was a highly contentious decision. And yet, if we had done that, the ability to then extend the model for CRDs-- to say, you're adding a CRD, go ahead and add your permissions now-- would be far more challenging. Because instead of being strictly additive, if you were adding something important that you didn't want everyone to be able to touch, and someone had a rule of "everything except," well, now you've accidentally granted access to that thing.

And so, it's an API everyone uses. It's an API many people think they want negative matching rules on, but if we had done it, it would have made that fostering-the-ecosystem stage, when we were trying to get more and more CRDs in there and make them easier to use, practically impossible.

KASLIN FIELDS: Interesting. I feel like it's really hard for folks on the outside to see those kinds of decisions and their effects. If you're not following those specific things for a long period of time, it's hard to see how something so intuitive-- I feel like a lot of people would naturally describe the roles that they want using that negative terminology, but it's hard to think through all of the implications that such changes might have. And that's why a lot of changes in Kubernetes these days take a very long time.

DAVID EADS: That is actually exactly why. We're sitting there and we're saying, "Where is this likely to go?" When we're going to enable some new feature to make node extensions easier, is this choice that we're making here going to make those node extensions harder to build, and thus, limit us in the next AI enablement stage? And we're trying to avoid problems like that.

KASLIN FIELDS: And so, RBAC is very closely tied with your work, I assume, in SIG Auth, since it's role-based.

DAVID EADS: Mm-hmm.

KASLIN FIELDS: Is it authorization control or authentication?

DAVID EADS: Role-based access control.

KASLIN FIELDS: Access control. There we go. Which I think is very closely tied to both authorization and authentication.

DAVID EADS: Yeah.

KASLIN FIELDS: But I feel like people don't know what SIG API Machinery is. So could you talk a little bit about the work that you're doing today in both SIG Auth and SIG API Machinery?

DAVID EADS: Sure. I guess I would say SIG API Machinery covers the ability to serve APIs out of etcd and the client libraries that use them for manipulation. So you can think of it as providing the basis for most of the controller code that you have. So things like work queues and informers, if you go down to that depth, as well as the basic client libraries that you would use to interact in Go.
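As a concrete example of the work queues and informers David mentions, here is a minimal, hypothetical client-go sketch (not project code): a shared informer caches Pods locally and invokes a handler on changes, which is the building block most Kubernetes controllers are assembled from.

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// A shared informer factory watches the API server once and serves a local,
	// read-only cache to every controller in the process.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	// Real controllers push keys onto a work queue here and reconcile from the
	// cache; this sketch just logs the event.
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			pod := obj.(*corev1.Pod)
			fmt.Printf("pod added: %s/%s\n", pod.Namespace, pod.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // block; a real controller would run worker goroutines here
}
```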

And so, when we look at what we need to do there, we're looking at things like, how do we enable novel kinds of APIs and extensions to be developed? So the most current things you'll be seeing are things like Server-Side Apply, where we're making it easier to own a particular set of fields. And you'll see other pieces like field selectors for CRDs: how do I select the set that is important to me?

And then, the combination of those two things actually works really well with-- you can also extend it to Auth and be able to say this user can see the CRD that has this kind of field set. And if you have the combination of all of that, it becomes possible for you to do things like per-node extensions.

You're able to look and say, I have an extension, it is on this node. It can manipulate only things related to this node in my custom API type, and here is how we have the pieces to do the permissioning and access on that. And so those two SIGs in particular, they're really sympathetic in terms of power you get and then how you control that power.

KASLIN FIELDS: Interesting. So API Machinery is the world of Kubectl and any kind of API you might use to interact with Kubernetes.

DAVID EADS: API Machinery is more the aspect that kubectl talks to. So we own things like the transport, and SIG CLI owns kubectl. Back in the day it was the same people, but we've since branched off. And so, for instance, API Machinery is working on something called streaming lists, and it will vastly improve memory utilization on an API server.

We have to develop both the server side of that, so the server can do it, and the client side of that, so the clients do it. And if we do it right, then kubectl will automatically do it, and therefore you will end up with overall reduced load. So it's a lot of second-order stuff. The API server gets the feature, Auth makes it possible to control access to it, API Machinery develops all the clients that can then access it, and then those clients get used everywhere you're going to need them.

KASLIN FIELDS: By like things that SIG CLI creates.

DAVID EADS: Yeah. CLI and Apps are the two biggest consumers that we have. Well, sorry, I forgot about Node. I like Node.

KASLIN FIELDS: All of the SIGs are just tied together.

DAVID EADS: Yeah. We do centralize right around this point because everyone has to authenticate, everyone has to be authorized, and everyone has to have the right constructs and efficient ones to be able to access the server.

KASLIN FIELDS: And I think a lot of folks out there, if you're a user of Kubernetes, you might not be familiar with the way that Kubernetes contributors organize their work. So hearing about SIGs, the Special Interest Groups, might be a new thing to you. But I think when you dive into it, it makes a lot of sense. We need to do all of these different things, and each one of these pieces is too big for any one person to handle all of them, so they get split up into smaller chunks, and that's how SIGs work.

DAVID EADS: Yeah. That's exactly right. And we try to provide the value that the SIGs want. You have other examples where it's things like SIG Auth wants to be able to have more efficient encryption in etcd.

And to do that, they want a new API Machinery feature. And to do that, they end up getting direction from SIG Arch about here's the list of things you need to do if we're going to change this particular piece of surface area that will affect every SIG out there. And so, go forth and do, but yes, it's split up to achieve those things.

KASLIN FIELDS: Go forth and do, the contributor motto.

DAVID EADS: Indeed.

KASLIN FIELDS: And so, this is the 10-year anniversary of Kubernetes that we're coming up on here. So a last question for you. What would you like to see next for Kubernetes?

DAVID EADS: I know you are going to get tons of answers that say, I want to get API enablement-- or AI enablement, AI enablement. And don't get me wrong, I care about that, too. But I think that there's a lot to be said about, like, how do we look and achieve that? And I think on that journey we need to focus on maybe two big areas.

We're going to need to have a way to have per-node extensions. And you can see this in spots where you get specialized hardware, and that's where the fan-out happens, on the node, where they have to handle this hardware and that hardware and the other hardware. And it has to be somehow leverageable with a higher-level construct, and we haven't yet built out how that will work.

The other one that I see is trying to establish new API and controller patterns that don't yet exist. They are different than what has come before, but we don't know how different yet. And so, we need to be extending into that area and saying, what new core machinery constructs do we need in order to support the controller constructs that are going to have to exist in order to know what the authorization constructs will be? To be able to have all those pieces come together. And you can see spots where that's starting today.

KASLIN FIELDS: I think that's a beautiful description of something that I think is really hard for folks to grasp, is that there's all this AI hype happening and all of us are having to shift and become AI experts and all of that, but AI is very interesting in that it really goes all the way down to the hardware.

And so, with a distributed system like Kubernetes where you have a lot of hardware and you're trying to run applications across it, it's a great use case, actually, for Kubernetes-- there's a lot of synergy there, but it does mean that there's a lot of considerations for the project to make that it hasn't had to focus on as intently before. Like you were saying, we need to think more about how we expand the types of hardware that are available on the node and how the applications use those.

And so, there's all of these-- the extensions, the controllers, the CRDs-- how are those going to have to change to enable these new, very hardware-specific types of workloads? I think that's a really interesting problem, actually, for the project.

DAVID EADS: It really will be. And when you look and think about the progress that's going to get made there-- if we make progress to support things like DRA, then we're going to get all sorts of unexpected intersections that are going to benefit on the side. So if we make a particular way to handle the APIs for DRA to work, great.

It will probably benefit every CNI and CSI that we have as well. For those that don't know, that's the networking interface and the storage interface. Those are per-node today, and they're somewhat rough around the edges when you go to use them as a user.

If we manage to solve this problem for DRA, we're almost certainly going to make things like that easier and better, both for the people writing the workloads and for the people running the clusters.

KASLIN FIELDS: And that's also going to be important for another very popular topic in the Kubernetes world, which is multi-cluster. Even with more expansion, scale is still an issue today. So these AI workloads, and the things we'll have to do to run them effectively, could actually have effects beyond those AI workloads and help the project run better in other use cases.

DAVID EADS: I think you're probably right there. So the scheduling constraints that end up getting built, there's going to be brand-new scheduling models that are going to have to get created to say, how do I know when to run this? How do I know when to preempt and how to preempt? And figuring out where those land in between single-cluster and multi-cluster.

And how you can actually disseminate the information so that the scale doesn't crush you. You have so many nodes, but how do you not have 10,000 different items for the 10 things that you have on every one of your 1,000 nodes? Figuring out how to abstract that out properly is going to be really cool.

KASLIN FIELDS: Awesome. I'm looking forward to what the future holds. Thank you so much for joining us today, David.

DAVID EADS: Yeah, it's been a great time. I love talking to you.

ABDEL SGHIOUAR: Today, we speak to Dims. Dims is a principal engineer at AWS and a long-term contributor to Kubernetes who has served on multiple committees for the project. And today, you are on the Technical Oversight Committee, or TOC. Welcome to the show, Dims.

DAVANUM SRINIVAS: Thanks a lot. Happy to be here, Abdel.

ABDEL SGHIOUAR: How is it going? All good?

DAVANUM SRINIVAS: Yeah. Everything is awesome.

ABDEL SGHIOUAR: Nice, nice. All right. So we are doing this as part of the 10-year anniversary for Kubernetes. We are talking to some awesome contributors, newer-generation contributors. Before I get going, are you planning to go to the birthday bash party in Mountain View?

DAVANUM SRINIVAS: I wish. I live in Boston, though.

ABDEL SGHIOUAR: Oh, wow, that's far for you.

DAVANUM SRINIVAS: Yeah. The wrong time zone to be in the party.

ABDEL SGHIOUAR: All right. Well, that's too bad. Cool, cool. I wish I could be there, but I'm going to have to do two different meetups, actually, for the 10 years.

DAVANUM SRINIVAS: Oh, wow. OK. Yeah, I think I already have the T-shirt that Tim Hockin designed, so I think I'm all set.

ABDEL SGHIOUAR: Good, good. We all know that swag is important.

DAVANUM SRINIVAS: Absolutely. That's the lifeblood of the community.

ABDEL SGHIOUAR: That's why we're doing this, right?

DAVANUM SRINIVAS: Yeah.

ABDEL SGHIOUAR: All right. I'm just kidding, of course. So, let's get going. You've been part of this community, actually, the Kubernetes community since the early days. How did you start out? What was your origin story?

DAVANUM SRINIVAS: I've been doing open source a while, floating between different projects and different foundations. So my introduction to Kubernetes was when I was working on OpenStack. So when I showed up in the community here, I was labeled the OpenStack guy. Nobody calls me OpenStack guy anymore here. And now I've been adopted by this community, so that feels good.

And the introduction was like, how do you best run Kubernetes on top of OpenStack? That's how I got started. There were a couple of projects that we started in OpenStack itself around like how do we stitch things together, how do you make it run better.

And like one of the first things that I ended up doing was a cloud provider for OpenStack. And that's how I got to know about the in-tree cloud providers, and-- I'm going deep into the weeds here.

ABDEL SGHIOUAR: Please do.

DAVANUM SRINIVAS: That was my introduction to, like, OK, this is a vibrant community, and they are thinking about extensions, they are thinking about lots of scenarios that I love. And naturally, slowly, I drifted deeper into the code and the community, and I'm having a lot of fun doing so.

ABDEL SGHIOUAR: I see. Is that what gave birth to eventually OpenShift at some point?

DAVANUM SRINIVAS: So OpenShift has been there right from the beginning even before I was part of it. I've never done OpenShift or Red Hat, per se. But OpenShift is more about an opinionated distribution of Kubernetes itself.

ABDEL SGHIOUAR: Got it.

DAVANUM SRINIVAS: OpenStack was essentially a private cloud on-- using virtual machines and things like that.

ABDEL SGHIOUAR: Yeah, yeah. I worked with OpenStack in the past. I was just thinking like-- because a lot of times people, when they talk about OpenShift, it's usually, OK, that's just OpenStack plus Kubernetes, but as you said, it's like more--

DAVANUM SRINIVAS: There are still a lot of people doing integration work between the two communities. The cloud provider for OpenStack is still there. The Cluster API provider for OpenStack is also very vibrant, and Red Hat folks are working on both those things as well. So it's all good.

ABDEL SGHIOUAR: Yeah. It's also because OpenStack is quite popular in the telco world, so that's what keeps the project going strong, I guess.

DAVANUM SRINIVAS: Correct.

ABDEL SGHIOUAR: All right. So in this whole time that you've been involved, were there any pivotal moments for you? Something that was an aha moment?

DAVANUM SRINIVAS: Well, one thing that I can call out is when Brian Grant asked me to be part of the SIG Architecture leadership team. So I was like, OK, I'm getting the keys to the kingdom. Though it is a lot of responsibility in terms of herding the cats and making sure that all the-- it turned out to be a good fit for me because I always think of myself as being in maintainer mode. Like, I'm not super into-- like, I can switch gears when needed for sure, but at heart, I'm a maintainer. Like, I want the codebase to be clean, and I don't want too many moving parts, and I want to reduce the dependencies and make sure things are working in the long term.

So that is where I operate best. Running the codebase and trying to get people to work on the right set of things. So that's where my heart is, and that turned out to be very good for me here.

ABDEL SGHIOUAR: Got it. So then, speaking of architecture-- and we know that being part of SIG Architecture means making decisions-- was there any early design decision that you were involved in? And how do you think that aged?

DAVANUM SRINIVAS: Yeah, I told you about the cloud provider stuff, right? So I've been involved in that work for ages. Like, if you go back to the initial issues that were opened and the first cut of the KEPs and whatnot-- actually, the work started even before we had KEPs. So that's a whole other story. There was a point when there were no enhancement proposals. Like, can you even believe it?

ABDEL SGHIOUAR: So anybody can just write a pull request?

DAVANUM SRINIVAS: Yeah. Anybody could merge code. It was the Wild, Wild West. But yeah, I think that was one of the things that the community should be super proud of. Right at the beginning, everybody-- it's a monolithic codebase, and a monorepository specifically.

So everybody wanted to land their code, and all the code went into the same repository. And so, everybody had their own vendored libraries that they wanted to bring in. And there were duplicates and whatnot. And the Kubelet was becoming too big and so on and so forth.

So we went through the process of, like, OK, can we remove all the cloud providers, all the credential providers, and make sure that everybody can ship at their own pace? Like, they don't have to depend on Kubernetes to ship, they can make their changes, fix CVEs, do their own patches, and be awesome in their own space and still be part of Kubernetes.

ABDEL SGHIOUAR: Yes.

DAVANUM SRINIVAS: A lot of the decisions were intentional, like the CSI drivers, the CRI drivers, and the external cloud providers. So I think that has worked out really well for us in terms of exploding the community as such. Not just Kubernetes.

If you go to the CNCF projects, there are, like, what, 140, 180 projects. And a lot of them have, one way or another, a relationship with Kubernetes. And all that is because of the decisions that were taken here even before my time, way before my time as well.

ABDEL SGHIOUAR: Yeah. So I know that we said we want to reflect on the past, but now that you are talking, I remember the tweet, the last code removal. One million-plus lines of code removed?

DAVANUM SRINIVAS: Yes.

ABDEL SGHIOUAR: And I think you tweeted it as, this is one of the best ways you can contribute to a project. You just contribute a pull request that removes code.

DAVANUM SRINIVAS: Yeah. I mean, just to be clear, it is vendored code.

ABDEL SGHIOUAR: Yeah, yeah. Vendored code, vendored code.

DAVANUM SRINIVAS: But the actual code that--

ABDEL SGHIOUAR: No. Kubernetes still works, right?

DAVANUM SRINIVAS: Yeah, absolutely. Kubernetes still works. You don't have to worry about it. It's just that that was a really good cleanup for sure. So over a period of time we did a bunch of things. Like 1.27, we cleaned out the AWS cloud provider. And then--

ABDEL SGHIOUAR: Yes.

DAVANUM SRINIVAS: And then 1.28, we did one more. And then so over a period of time-- like the VMware cloud provider. And then the Azure cloud provider, that was the one that went in just before this. So like over a period of time, every release, we were able to clean some of these things up. And then finally, the big one has been the Google cloud provider, which you saw the tweet about.

ABDEL SGHIOUAR: Yeah. So that's the 1.30, basically. The last version?

DAVANUM SRINIVAS: Yes.

ABDEL SGHIOUAR: Cool. Nice, nice. Yeah, I just remembered that I was looking at-- I was on Twitter and I saw your tweets and I found this funny.

DAVANUM SRINIVAS: Yeah. And it's actually going to go into 1.31.

ABDEL SGHIOUAR: Ah. 1.31. OK, OK. So the next release. OK, cool.

DAVANUM SRINIVAS: But there is no difference for people using Kubernetes itself. Most of this is around how cloud providers integrate with their distros and things like that. And we have out-of-tree cloud providers for everything already, and everybody switched to them a long time ago.

ABDEL SGHIOUAR: Yeah.

DAVANUM SRINIVAS: It's just that it was tech debt that was piled up and we had to clean it out over a period of time, and we were able to pull that off.

ABDEL SGHIOUAR: Cool. Awesome. Awesome. So can you tell us any anecdote, one of your favorites, over these 10 years of involvement?

DAVANUM SRINIVAS: The thing that I really, really love is how we launched the community in India. The contributors that are coming into Kubernetes-- we have a lot of students coming in. We have a lot of people in important positions within the community from India and from the Indian subcontinent, not just India.

And that all started when Nikita came and joined us through Red Hat as an intern. I've always been bothered by the fact that whichever open source foundation or project that I'm in, we don't really have an Indian identity, so to say. And I was like, OK, what can we do different here?

So we started-- a lot of people don't even know this. Like there is an InDev Slack channel, which is very popular with folks from India. And when we see folks asking questions and things like that, we add them into the InDev channel. It's an old public channel.

ABDEL SGHIOUAR: Yeah.

DAVANUM SRINIVAS: You can speak if you want. So we get them engaged. Like, here is a documentation thing that somebody has to do. Here is an issue that you can work on. And also, to support each other.

So if somebody knows a little bit more than somebody else, then they should be able to help the other person coming in. So you set up that kind of a structure where you are not the single point of failure, but you are setting up a community for success, so to say. And you can see how many KCDs are going on in India.

ABDEL SGHIOUAR: Of course, yeah.

DAVANUM SRINIVAS: And we have our first KubeCon in India in December! I'm so, so, so happy to see that happen finally.

ABDEL SGHIOUAR: I was about to bring that up, but before, I think that I like the way you described it, that if you want to increase participation from other parts of the world in an open source project, you have to be intentional about it. You cannot just expect it to happen.

DAVANUM SRINIVAS: Absolutely. It doesn't happen by itself.

ABDEL SGHIOUAR: Yeah. You have to be intentional. You have to have-- and also, you have to have role models. You have to have people who get involved first and then--

DAVANUM SRINIVAS: Yes.

ABDEL SGHIOUAR: They inspire and pull more people.

DAVANUM SRINIVAS: Yes. I'm an introvert by nature, but I have to get out of my skin to actually go do some of these things. But over a period of time, it works out.

ABDEL SGHIOUAR: Nice. And then, KubeCon in India for the first time this year, I assume you are going.

DAVANUM SRINIVAS: I'm going. Like-- it doesn't matter one way or another, I'll be there.

ABDEL SGHIOUAR: Awesome. I think-- I've been talking to part of our team in APAC in the region, and I've been asked if I could go, so there is a possibility I will go. We'll see.

DAVANUM SRINIVAS: Thank you! And that'll be awesome!

ABDEL SGHIOUAR: That'll be awesome.

DAVANUM SRINIVAS: We'll have a lot of fun.

ABDEL SGHIOUAR: I've never been to India, so I'm very excited. So I travel a lot. We were talking about this just before we started. I travel a lot. And when people ask me what I do, I tell them I travel to run, eat food, and then maybe there is a talk somewhere.

[LAUGHTER]

DAVANUM SRINIVAS: Yeah. And that's, by the way--

ABDEL SGHIOUAR: That's like third on the list of priorities of things to do. All right.

DAVANUM SRINIVAS: Yeah.

ABDEL SGHIOUAR: Cool. So what are you working on these days?

DAVANUM SRINIVAS: It's mostly cleanup, especially this cycle-- we need to wrap up a few more things. But one important thing that we've been working on for the last year and a half is setting up alternate infrastructure on different cloud providers. Like, for example, we have always run things on Google-- the GCP platform. We use everything that is available on Google Cloud Platform.

But when the credits came in from AWS early last year, we started setting up parallel CI/CD systems and making sure that when we publish container images, they get served local to the people who are trying to pull them. So that has been awesome.

ABDEL SGHIOUAR: Nice.

DAVANUM SRINIVAS: Especially in terms of reducing the dependency on specific people, specific teams, and specific vendors. And the main thing that I worry about these days is sustainability. How do we exist as a project for the next 25 years, 50 years? That is the kind of thing that we need to think about. This doesn't include just technical aspects, but also the people aspects.

ABDEL SGHIOUAR: Yeah.

DAVANUM SRINIVAS: How do we scale ourselves? How do we provide continuity? How do we make sure that the next generation of leaders is coming up? Things like that. So LFX Mentorship, SIG ContribEx, and SIG K8s Infra.

ABDEL SGHIOUAR: Yeah.

DAVANUM SRINIVAS: So that's where a lot of the work is that I pay attention to and help out with when I can.

ABDEL SGHIOUAR: Nice.

DAVANUM SRINIVAS: So yeah, that is very important to me at this point.

ABDEL SGHIOUAR: And I just wanted to mention that I think the SIG test infrastructure would probably be a good place for people who want to see how a large-scale project deals with CI/CD and--

DAVANUM SRINIVAS: Absolutely.

ABDEL SGHIOUAR: --production-like systems, even if it's not technically production, but you don't really have to contribute code to Kubernetes itself. You can contribute code to the thing that supports Kubernetes.

DAVANUM SRINIVAS: Absolutely. So we have two things. One is SIG Testing, and the other one is SIG Kubernetes Infrastructure.

ABDEL SGHIOUAR: Yeah.

DAVANUM SRINIVAS: So in conjunction, both of them need to work together for sure. And, yes, people, when they come to us-- so there is some amount of trust building that is involved and some amount of sticking around that we require from the people, and we need to get to know them a little bit.

ABDEL SGHIOUAR: Of course, yeah. Of course.

DAVANUM SRINIVAS: And so, it's a little bit harder to get into, especially on the SIG K8s Infra side, but we've had some really good success. Like we had people like Mark, and folks like that who have come around. And Mahamed Ali, updroid on GitHub, and Arnaud Meukam-- literally, they are running the things now. They are the chairs.

So from that point of view, we welcome a lot of people from a DevOps background to contribute here. It doesn't have to be code that goes straight into the Kubernetes repository, for sure, like you said.

ABDEL SGHIOUAR: Yeah, that's another aspect of how you could contribute. But also, just learn from it. Like, it's large-scale-- multi-million dollars' worth of credits spent on each cloud provider every year, right?

DAVANUM SRINIVAS: Yeah! $4 million this year--

ABDEL SGHIOUAR: It's insane, right?

DAVANUM SRINIVAS: --we'll be spending $4 million across the two clouds.

ABDEL SGHIOUAR: I think if the Kubernetes Project was a startup, it would probably be a unicorn just in terms of how much money it spends on infrastructure.

DAVANUM SRINIVAS: The funniest thing is, people don't even realize that they're using all this stuff for free.

ABDEL SGHIOUAR: Oh yeah.

DAVANUM SRINIVAS: They expect it. And they expect to be able to download any Kubernetes artifact from anywhere in the world whenever they choose.

ABDEL SGHIOUAR: For free.

DAVANUM SRINIVAS: Like, they want to run any of the containers that we ship on any Kubernetes cluster they have, on on-prem or on any cloud without blinking an eye. And we've spoiled them, literally.

ABDEL SGHIOUAR: Yeah. I think-- I mean, I don't want to go too much into the weeds, but there have been whole discussions around whether open source projects should actually be expected to deliver built artifacts at all.

DAVANUM SRINIVAS: Absolutely. Like, yeah, I want to go back and say, this is source code, do whatever you want--

ABDEL SGHIOUAR: Build it yourself, right?

[LAUGHTER]

DAVANUM SRINIVAS: Yeah. It costs too much money.

ABDEL SGHIOUAR: Yes.

DAVANUM SRINIVAS: Where are we going to get the money from?

ABDEL SGHIOUAR: Exactly, exactly. All right, cool. So one last question. I don't want to take too much of your time. What would you like to see next for the project?

DAVANUM SRINIVAS: I would like to see a new set of leaders showing up. I specifically want people who work across the SIGs, not just in one specific SIG. Don't come here to just do one feature in one SIG and, like, yeah, be happy. I want you to go up, and go across, and be part of the community, and lead the community for the future.

Just like we showed up and we picked up some of the work, we would like other people to come and join us in this journey. It's going to be a long journey. All of us have finite lifetimes.

ABDEL SGHIOUAR: Yeah. And with AI and ML, Kubernetes is going to stick around for a very long time.

DAVANUM SRINIVAS: Absolutely, yeah.

ABDEL SGHIOUAR: It's going to be-- there is always more work to do.

DAVANUM SRINIVAS: Yes. You said it.

ABDEL SGHIOUAR: Nice. Awesome. Thank you very much, Dims. Thanks for your time.

DAVANUM SRINIVAS: Thanks a lot. It was great talking to you. And hope to talk to you in KubeCon India!

ABDEL SGHIOUAR: Hopefully. I'll let you know.

DAVANUM SRINIVAS: OK.

ABDEL SGHIOUAR: Awesome. Thank you. Have a good one.

DAVANUM SRINIVAS: Bye.

KASLIN FIELDS: Today, I'm super excited to be speaking with Federico Bongiovanni. He is an engineering manager at Google, and he started using Kubernetes in the early days at a previous company and then became a contributor about six years ago when he joined Google. Today, he's a Co-Chair of SIG API Machinery, along with our other guest, David Eads. And we're so excited to have you on today, Federico. Welcome to the show. Would you like to tell us more about yourself?

FEDERICO BONGIOVANNI: Yes. I first want to say the one that is super excited is me. It's an honor to be talking to you and to be sharing this space with David and a lot of people that I respect a lot, and that I've been working with for so many years now. Sometimes they start feeling like friends more than co-workers.

And I think that is one of the fantastic things about open source, because if you think about it, at some point, it's different competing companies in the abstract that come and work together to make something better regardless who you work for. So that is something I really, really like.

KASLIN FIELDS: That's definitely a big benefit of open source that we talk about all the time, is that you have this community, this network around you that isn't just the folks that work with you at your company, but beyond that.

FEDERICO BONGIOVANNI: Yeah. And certainly, we can talk a little bit more about this, but I will say that one of the things that I've always held myself accountable to-- and I learned that from the people that I work with-- is that the project always comes first, regardless of the company that you work for. So there is no pressure that anybody can put up the management chain of my company that is going to make us bend any rule or do something that is not a benefit for the entire community.

So I don't like speaking about myself a lot, but for the sake of the story, I was working at a different company eight years ago. The company is called MuleSoft. MuleSoft is an enterprise service bus company, but I mention this because they also have an open source version of their commercial enterprise product, so I was already tied to the open source community.

And then, at some point-- we ran a large cloud infrastructure, and we decided to explore Kubernetes. This was in the early days of Kubernetes, 2016, '17, and it was probably when all the hype was starting for Kubernetes. So we were one more of the fans of the product, learning how it works, et cetera.

And then, one day, I get a phone call from Google saying, "Hey, we saw your profile. We have a couple of options for you." And in honesty, they sent three different projects, but I only remember one.

There is the Kubernetes project. I was like, "OK, what do you mean, Kubernetes? Like doing what with Kubernetes?" And the recruiter was like, "Well, working on Kubernetes." I was like, "OK, where do I go?"

KASLIN FIELDS: Where do I sign up?

FEDERICO BONGIOVANNI: Yeah, exactly.

KASLIN FIELDS: Love it.

FEDERICO BONGIOVANNI: Everything here is-- whatever you ask me, check yes on all the questions. So yeah, I joined Google, long story short. And I can tell you more about that, but basically, the team that I was put in charge of initially included some of the people that started the Kubernetes project-- like Daniel Smith, who I love a lot. And I learned so much in such a short period of time, about the project and becoming a contributor and how it works.

And then I started to understand the broader community. And I met David Eads and Dims and many, many others. So it's been really, really a fantastic journey.

KASLIN FIELDS: So when you came into Google, did you come in as an engineering manager working with open-source Kubernetes? Or did you come in as an individual contributor?

FEDERICO BONGIOVANNI: I came in as an engineering manager, yes.

KASLIN FIELDS: I feel like that's a very unusual story. A lot of the folks that we talked to who are involved with open-source Kubernetes are usually individual contributors, who their day job is like writing code or something along those lines. But one thing I learned more and more as I get involved with the Kubernetes project is that there's so much management involved with all of it. So how do your management skills translate to your open-source work? Do you use them a lot?

FEDERICO BONGIOVANNI: Yes, they do. And I think there are two parts to the story. At Google, with the team-- but it's not disconnected from the open source community, because many people on my team, to this day, mostly work on open source. That is their job. And I think it's fantastic that we were able to keep it that way.

I will tell you this. This is kind of a confession, a funny story. So I joined this team. And Google is a very bottom-up company. Nobody tells you what to do-- figure out your job, that's why we hired you.

KASLIN FIELDS: Yep.

FEDERICO BONGIOVANNI: So I start to meet with these people, and, like, what do I have to do here? Like, they are so smart, they know everything. Imagine, I don't know, Daniel Smith. He wrote part of the first lines of code of Kubernetes.

And so, I spent some time getting familiar with what was going on. And what I realized is that they didn't need another person to code. They were probably much better than me at coding. They didn't need another manager to do code reviews. What they needed was a little bit of organization and priorities. That was one.

And the other big one was-- I think they were doing fantastic things, but please, nobody hate me when they hear this-- a lot of people around them didn't know what we were doing. Even at Google, we were doing all these fantastic features, and enhancements, and extensions, et cetera-- so much technical work that there was not enough investment in building more awareness. And connecting with opportunities-- so we are building this exactly for this use case.

So I started to be that person, using, as you said, my management skills to organize a little bit the work and the priorities, make sure that it's sustainable, like people don't get overburdened and things like that. But at the same time, connecting with the right opportunities with other teams, other products, and other companies. So I think that is what I did to apply some of my different skills, not just coding and reviewing.

KASLIN FIELDS: Yes. This is something that we talk about in open source a lot, is that we never have enough individual contributor-type contributors, of course, who can write code and understand the code and make improvements to Kubernetes as a project, but something that we're really lacking in open-source Kubernetes and that's really hard to attract to the project is folks with those types of management skills because we have to have those.

As with any kind of product you've ever worked on at any company, you have to have the folks who define what those new features are going to look like and who they're for, help communicate to those people what they're for, and generally organize the work: we need to create this, and then this, and then this. And who's going to do those pieces?

And so, people end up doing that in open-source Kubernetes even if they really prefer more of the heads-down coding-type work. And so, we have a lot of conversations about how we can encourage people to use and build those skills, and how we can encourage people who have management-type skills to come to the project. So I'm so excited to hear that that works out for you.

FEDERICO BONGIOVANNI: Yeah. And another question that you probably will not ask me, but I want to say it: it felt super natural and very welcoming for me to bring this. Like, I'm a co-chair of the SIG. David is very different from me. He's much more technical. There are many things that he can do that I probably can't do.

But we divide and conquer. And it doesn't feel forced or anything like that. So that is what I wanted to say-- it was very welcoming, the feeling I had. I never heard anything like, oh, you should code more, et cetera. It was clear that I could contribute in other ways.

KASLIN FIELDS: And I think that's a theme that I've noticed in open source as well, is that we want folks to do the things that they're excited about, and that they care about, and that they have the skills to do and things that they're comfortable doing. And a lot of folks in open source are very Type A, we might say. They love to pick up work, they love the project. And so, they'll just pick up anything.

But we try to implement some checks and balances with each other and say, hey, no, you're really good at that and we need that done, so could you please do that?

FEDERICO BONGIOVANNI: Yeah, I agree. I agree.

KASLIN FIELDS: Yeah. So you've been involved with the project for six years.

FEDERICO BONGIOVANNI: Yeah.

KASLIN FIELDS: Doing a lot of these management-type things in SIG API Machinery. And you have, of course, a technical background as well, so I'm sure that you're deeply involved with a lot of the technical decisions that go into the project. So are there any particular early design decisions that you were involved with that have aged interestingly?

FEDERICO BONGIOVANNI: Yes. But I will say it's been evolving over time. When I started to be more involved in the open-source community of Kubernetes, I came from a very top-down company. So I was like, OK, the governance part must be SIG Architecture, so I started to go to all the SIG Architecture meetings.

And then I was like, well, it doesn't sound like they are vetoing or approving everything. There is a lot of autonomy in each SIG. What I'm trying to say is, I also learned that the Kubernetes community works more organically. I think there is a bar in general, and SIG Architecture provides the guidelines and a lot of the things that are horizontal to the project. But then every SIG has a lot of autonomy, and there is a lot of trust put in every SIG to make the right decisions for the project.

Which is why, anecdotally-- or between parentheses-- it's really hard, in a good way, to get to a place where you have the "power," air quotes, of approving broader changes, because if you approve the wrong thing, it can have horrible consequences for the community.

What was I involved in? I was involved in large features. I was involved in Server-Side Apply, largely. I was involved when we GAed CRDs in 1.16. I remember that one.

And lately, we've been working on a lot of features. Also, Kubernetes has become a more mature project over the years, I would say. So the way that we made decisions or technical designs in Kubernetes 1.05 is not the way that we are doing them in 1.30. The needs are different. The customer base of the project is different, et cetera.

And I will confess, at the beginning, I was super hesitant. I'm a shy person by nature, so I was like, what can I say in this conversation between all these amazing giants of the project that-- can I add anything?

But then I realized that I did have a lot of background and experience running big systems and all of that, so I started to participate more. And again, I was always made to feel super welcome. You know that saying, there are no dumb questions? I think it's true.

And even if you ask a dumb question, nobody's going to make you feel like you are. If you're asking, it's because you don't know. In my experience-- I cannot recall a single time this didn't happen, at least in public meetings-- somebody's going to take 30 seconds to explain why what you're asking was already addressed, or why it doesn't apply here.

So there were two things. One is scalability. I remember when we were discussing some of these features, there is always-- maybe because of working at Google, or maybe because of my own background-- the consideration of how this scales when the cluster gets bigger, and things like that.

But the one I remember most recently is-- in SIG API Machinery, we are driving this initiative to adopt CEL, which stands for Common Expression Language. CEL is super powerful for doing things like validations for CRDs. For example, instead of having to build a webhook to validate some fields or things that you want to enforce, you can express some basic validation right in your YAML: this is a string and it cannot be empty. So it simplifies a lot, et cetera.
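To make that concrete, here is a minimal sketch of the kind of CRD validation rule Federico is describing. The resource and field names (widgets.example.com, displayName) are made up for illustration; the x-kubernetes-validations stanza is the real CEL mechanism, which the API server evaluates directly, so no admission webhook is needed for this check.

```yaml
# Hypothetical CRD: the names are illustrative, the validation mechanism is real.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
    singular: widget
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              required: ["displayName"]
              properties:
                displayName:
                  type: string
                  # CEL rule checked by the API server at admission time,
                  # in place of a validating webhook.
                  x-kubernetes-validations:
                    - rule: "self != ''"
                      message: "displayName must not be empty"
```

With a schema like this, creating a Widget whose spec.displayName is empty gets rejected by the API server with the message above, which is the "get rid of the webhook" simplification Federico goes on to mention.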

So it's a big change. We've been working on this for the last two years, I would say-- Joe Betz and Cici Huang from my team, and others. And there was a lot of discussion about customer feedback. How much is enough? Because we want to make technical decisions, but there is so much to do that it comes down to what is more important than what. Is this going in the direction of what customers will want to use? Are we building the thing that is going to allow them to get rid of admission webhooks?

And then, that ties into the way that Kubernetes works: alpha 2, beta 1-- we wait for how many releases until we can graduate? Do we have enough feedback? Was this enough?

So I've been trying to help on that front. And I think we've been changing some of the priorities-- what comes next, how much we do for the next release, whether this is ready to graduate or we need to collect more information, et cetera. That is the reason why.

KASLIN FIELDS: Yeah. Just like with any business, when you create a product, you have a limited number of resources and a limited number of people and limited amount of expertise to decide what you're going to work on. There's a million things that customers want and you can only do some of them.

FEDERICO BONGIOVANNI: Exactly.

KASLIN FIELDS: So always important.

FEDERICO BONGIOVANNI: And as you said, there are a lot of Type A people who want to do everything, but we can't do everything. The day has only 24 hours, and there are only so many of those you can dedicate to making changes in the code before code freeze, et cetera. So let's make sure that we are aligning the priorities in a meaningful way.

KASLIN FIELDS: That is such an important part of the project, and thank you so much, Federico, for talking with me today about it. And I hope folks out there have learned something today, that these skills are very useful in open-source Kubernetes. Maybe encourage your managers to get involved.

FEDERICO BONGIOVANNI: Yeah, to get more involved. We need more help all the time. Everybody is welcome.

KASLIN FIELDS: Wonderful.

FEDERICO BONGIOVANNI: And if they don't know what to do-- sorry to interrupt. Just come to me. FederBongio is my Slack handle. I have a list of things you could be helping me with that's at least longer than this screen.

KASLIN FIELDS: Perfect. So definitely make sure that you reach out to Federico. Any other last tips for folks listening who might want to get involved?

FEDERICO BONGIOVANNI: I will say-- it sounds like a cliche-- the way of Kubernetes, because it's open source, is chop wood and carry water. That is how you do it. You start small, you start getting involved, you slowly earn respect and make a name for yourself. And as long as you are consistent and willing to dedicate the time and effort, you're going to make progress for sure.

KASLIN FIELDS: We hope to see you in the project someday, listeners out there.

FEDERICO BONGIOVANNI: Totally.

KASLIN FIELDS: Thank you so much, Federico.

FEDERICO BONGIOVANNI: Thank you for having me. It's been a pleasure.

KASLIN FIELDS: I'm so excited that we got to talk to these amazing members of the contributor community. Dims, of course, recently submitted that 1.1 million-- over 1 million lines of code removal from the Kubernetes project, which we talked about a bit. David Eads and Federico are folks that people may not have heard of before. They're deeper in the project, kind of, but I love their perspective-- David being a chair and tech lead across a couple of different SIGs, and Federico being an engineering manager talking about the different skills it really takes to run a project of this size.

ABDEL SGHIOUAR: Yeah. It's pretty interesting. First of all, talking to these three, you can already see the amount of maturity-- they've been around for a very long time. So it's a different crowd, I would say. I think in the two years we've been doing the podcast, I've never heard somebody talk that way. It was pretty interesting.

And I guess it's just because we've been interviewing quite a lot of new people, people doing all sorts of interesting things, and we didn't really talk to the original maintainers because they have been on the show in the past.

KASLIN FIELDS: It's also different now. It's the 10-year anniversary. So looking back has a different feel to it.

ABDEL SGHIOUAR: Of course, of course. Of course, yeah, yeah. Yeah. I remember the first time I saw Dims and Tim was at the Kubernetes maintainers read mean comments talk-- the one they do each time at KubeCon? That's a lot of fun, actually.

KASLIN FIELDS: KubeCon EU this year, actually, Dims and Tim did this talk called Kubernetes Maintainers Read Mean Tweets I think is how they phrased it? Right.

ABDEL SGHIOUAR: --Comments? Something like that.

KASLIN FIELDS: It was really just mean comments, yeah, mostly from GitHub. And I loved the way that they framed it. It was like, we're going to teach you a little bit about how you can communicate with the contributor community through counterexamples.

ABDEL SGHIOUAR: Yes. Through things you shouldn't do and say.

KASLIN FIELDS: Yeah. And that room was packed. People could not get in. It was standing room only. And they stopped letting people in. So it wasn't really standing room only because I don't think they let people in to stand.

ABDEL SGHIOUAR: But it was packed.

KASLIN FIELDS: Fantastic talk.

ABDEL SGHIOUAR: Yeah. The talk is on YouTube. We'll make sure to have a link for it.

KASLIN FIELDS: Yep. 10 years of reading mean comments.

ABDEL SGHIOUAR: Yes. But it's cool. Actually, I think I enjoyed the talk with Federico a lot, even if the interview was short. Being able to still contribute from a management point of view is quite interesting because, again, each time we talk about open source, people just think code, but you also need all sorts of personalities and profiles to run a project like Kubernetes.

KASLIN FIELDS: My favorite quote from the book "Working in Public" by Nadia Eghbal, which is about open-source work, is that people don't stay out of open source because they lack the technical skills-- they stay out because they're afraid of committing a social faux pas. And I feel like the world of all of the management and stuff that goes into the project that people don't really think about is one of those things that is easy for folks to miss from the outside.

ABDEL SGHIOUAR: Yeah. Yeah.

KASLIN FIELDS: So you come in and suddenly all of these people are doing management stuff all around you and you're like, wait a second, what am I doing?

ABDEL SGHIOUAR: And that's essentially what Federico was saying. It was like, yeah, what am I supposed to do here? What's going on?

KASLIN FIELDS: Mm-hmm. I feel like there's some really good stuff in that book also that will speak to how important all of the management pieces are that go into open source. I feel like that was my first look at it before I really dove into it in open-source Kubernetes.

ABDEL SGHIOUAR: Yeah. I think I haven't read it yet. I should read it. We'll make sure that there is a link also in the notes.

KASLIN FIELDS: Yeah.

ABDEL SGHIOUAR: Cool. And then, yeah, with Dims we discussed a little bit the 1.1 million-plus lines of code removal.

KASLIN FIELDS: So exciting and terrifying.

ABDEL SGHIOUAR: So exciting. Yeah. The best way you can contribute to open source is remove code.

KASLIN FIELDS: That is actually-- we had huge discussions about that at KubeCon EU at the Contributor Summit because, I mean, Kubernetes is so big now. And in Tim's keynote about the 10-year anniversary, he was saying one of the big things that the project needs to do over the next decade is streamline. It's going to be more important than ever to keep the project in good shape.

Not letting it get so terribly complex and branch out so much that it just becomes completely unwieldy and people lose focus on it. Which-- it's already a huge community and a huge ecosystem, but managing that is where the next decade of work is going to have to focus.

ABDEL SGHIOUAR: Yeah. Yeah, yeah. I think the interview with David was really cool, especially-- OK, version 1.0 of Kubernetes did not have namespaces and did not have whatever other features. It's like, OK.

KASLIN FIELDS: I loved this look-back so much. It's so cool hearing about the early days from folks who are still involved. It's like, wow, this has really changed a lot.

ABDEL SGHIOUAR: Yes.

KASLIN FIELDS: And from the early days when there were no SIGs to this?

ABDEL SGHIOUAR: Yeah, of course, yeah. And the community was like, whoever has time, do stuff, right?

KASLIN FIELDS: Mm-hmm. Yep. That was such an interesting perspective to hear from all of these folks, is that when they joined the community, it was just like, things were happening. And there were some of these areas that we were like, we know we need to figure this out, and you just kind of dove into it.

Whereas these days, I feel like when you start contributing, it's a lot more intimidating in a way because there's so much more structure in place. There's all of the special interest groups, and half of them have names that are not completely obvious when you first read them. Once you dive in, they usually make sense, but which SIG should I go to? How does all of this work? There's some kind of leadership thing going on here, and there was none of that in the beginning.

ABDEL SGHIOUAR: Yeah. And also, like Dims mentioned in the interview, there are some SIGs that require more stability, like the test infrastructure and also SIG Infrastructure. That's all the GCP and AWS setup where you build the infrastructure-- the test infrastructure and the release infrastructure. It's harder to get into, and then once you are in it, you need to commit a minimum amount of time.

But on the other hand, I've actually been wanting to get involved in one of the SIGs, and I think those SIGs make a lot of sense for me because it's more on the infrastructure side of things rather than coding for Kubernetes itself. So it was an eye-opening conversation.

KASLIN FIELDS: Yeah. Setting up those clusters--

ABDEL SGHIOUAR: Yeah. Yeah, yeah.

KASLIN FIELDS: Setting up the clusters to test each release and stuff is huge.

ABDEL SGHIOUAR: Yes. And setting up the serving infrastructure, like serving the images. And all the stuff that goes into the project that is not part of Kubernetes codebase itself, everything around it.

KASLIN FIELDS: And I love that you all talked about the budget.

ABDEL SGHIOUAR: Oh, yes.

KASLIN FIELDS: Such an interesting topic in Kubernetes, because we spend millions of dollars a year to provide pre-built artifacts to users. And as Dims was saying, people kind of expect that now, but it costs a lot of money to do, and people don't realize it. And there have been a couple of times where the project has nearly run out of money because of serving those artifacts.

And so, we've had to go to the cloud providers and say, hey, we really need some more money to be able to keep running this stuff, and work out new deals with them. Dims was mentioning AWS giving the community a whole bunch of credits that we've been using lately. And of course, Google Cloud still provides a whole lot of credits to the community for all of these activities-- testing releases, delivering releases and artifacts, and all of that stuff. Diving into the budget is fascinating.

And we actually have a project going on in open source right now where we're working on a blog post about not really the budget breakdown, but, like, I guess kind of the budget breakdown? Like how we use all of that money and why it takes so much and all of that stuff. So look out for that blog post in the future. I would include a link, but it's not there yet.

ABDEL SGHIOUAR: Yeah, that would be cool to read. We can discuss this later, but I think I probably want to have Dims on a separate episode where we talk specifically about this expectation in open source. People just expect certain stuff, and I'm like, as long as you release code, you're doing open source. You don't technically have to provide an artifact.

KASLIN FIELDS: Yeah. And see, this is going to be the problem with all of these 10-year anniversary episodes. After I was done talking with Fede, I was like, I just want to do a whole other episode with Fede.

ABDEL SGHIOUAR: There's so many opportunities for discussions.

KASLIN FIELDS: Yeah. And I let the David conversation go to about the length of one of our normal interviews even though it's a multi-interview episode because it was just so interesting. And that's just going to happen with all of these episodes, and they're all going to have multiple guests. So look forward to more content.

ABDEL SGHIOUAR: Yeah. Yeah, yeah. Cool. So what are your plans for the 10-year anniversary? You're going to the Bay View party?

KASLIN FIELDS: Party. Yeah. I hope so. I've still got to get that travel approved.

ABDEL SGHIOUAR: All right.

KASLIN FIELDS: I'm not speaking there or anything, but I've been helping out with some of the planning around what activities are going to be happening there and stuff. So I'm hoping that I can be there on site to see how it goes and interact with the community and make sure that all of the activities are going smoothly. So yeah. I hope to be on site at the June 6 party. And by the way, we're planning to release an episode--

ABDEL SGHIOUAR: On June 6.

KASLIN FIELDS: On June 6, or at least very close to it.

ABDEL SGHIOUAR: Yes.

KASLIN FIELDS: Which is off our normal schedule, but we gotta celebrate the birthday.

ABDEL SGHIOUAR: Yes. We will have to. I think I'm going to have to do that because I think you're going to be traveling that day. I will be also traveling, but much later in the day. So if the episode is ready on time, I'll publish it.

KASLIN FIELDS: Yeah. You have quite the activities coming up for the 10-year anniversary as well, don't you?

ABDEL SGHIOUAR: Yes! Actually, on the 6th of June, I'm going to a meetup in Bergen, Norway. It's a city on the west coast of Norway, and it's one of my favorite parts of the country. It's just a really beautiful city.

So I'm going to be there for a meetup with the local community. And then, on the 10th, which is the week after, I'm going to be in Aarhus, in Denmark-- different city, different country-- talking about Gateway API. A different meetup, but also celebrating the 10-year anniversary. And I don't remember which one, but there will be a cake at one of them.

KASLIN FIELDS: So don't forget. No matter where you are in the world, look into what KuberTENes parties or meetups or events are happening near you. There are so many of them.

ABDEL SGHIOUAR: There are lots.

KASLIN FIELDS: It's so exciting. There are so many active communities.

ABDEL SGHIOUAR: Yeah, yeah. When we were preparing for this episode, I was looking at the list, I was like, that's a lot. It's like 50-plus events. So make sure to hit one of them.

KASLIN FIELDS: Awesome. I'm looking forward to all of the 10-year stuff. Please post stuff on social media so that we can see it. Tell us about what you're doing. We'd love to hear from you all about what you're doing to celebrate 10 years of Kubernetes.

ABDEL SGHIOUAR: Yes, please. If you post on X, feel free to tag us. We'll make sure we retweet your post, whatever it is.

KASLIN FIELDS: On LinkedIn.

ABDEL SGHIOUAR: Or LinkedIn. One of them. So all right. Cool.

KASLIN FIELDS: We'll see you on the internet.

ABDEL SGHIOUAR: We'll see you on the internet. Thank you, Kaslin.

KASLIN FIELDS: Thank you.

That brings us to the end of another episode. If you enjoyed this show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on social media at @KubernetesPod, or reach us by email at <kubernetespodcast@google.com>. You can also check out the website at kubernetespodcast.com, where you'll find transcripts, show notes, and links to subscribe.

Please consider rating us in your podcast player so we can help more people find and enjoy the show. Thanks for listening, and we'll see you next time. By the way, we do that end bit every time. You're welcome.

[LAUGHS]

[MUSIC PLAYING]