#251 April 24, 2025
Nina Polshakova is a software engineer at Solo.io, where she’s worked on Istio and API Gateway projects. She’s been part of the Kubernetes release team since v1.27 and is currently serving as the Release Lead for v1.33.
KASLIN FIELDS: Welcome to our Kubernetes 1.33 release episode. I am excited today to be speaking with the release lead, NINA POLSHAKOVA, who is a software engineer at solo.io where she's worked on Istio and API gateway projects. She's been part of the Kubernetes release team since 1.27 and is currently serving as the release lead for 1.33. Welcome Nina.
NINA POLSHAKOVA: Hey, thanks for having me.
KASLIN FIELDS: Thank you very much for the bio. I feel like you went over a good amount of your history there, but is there anything else that you want to elaborate on how you got to where you are now?
NINA POLSHAKOVA: When I first joined Solo, one of the selling points was that Solo has open source projects, so you can go on GitHub and see in the repo what people have contributed. Before I joined Solo, I hadn't done much open source, but it was an opportunity to work in the open source space and contribute to code that other people could see, which is scary. I remember my first issue was a community issue. Somebody was asking to add support to our API gateway for the Istio integration to work with Istio revisions. We'd talked about it offline with my team, and it was my first time responding to a GitHub issue. I had to tell him that we couldn't support revisions, but we could expose this other field, and hopefully that was OK. I was so worried that for some reason I'd get a negative response online, but it wasn't that scary. I think most open source communities are very welcoming and encourage collaboration. I've only had positive experiences commenting on GitHub issues. Even as a member of the Kubernetes release team, sometimes you have to tell people that their enhancement isn't going to make the cut and ask them to please file an exception. Even there, people are always very welcoming and understanding. I think it's a good community to be a part of because people are nice online. It's not really social media. It's a project we're all working together to build and make better.
KASLIN FIELDS: One of my favorite comments that I always love to pull out is from the book Working in Public by Nadia Eghbal where she says that the thing that keeps most people from working in open source is not actually technical competency, it's the fear of committing a social faux pas. People usually are pretty nice in these communities.
NINA POLSHAKOVA: That was definitely my biggest fear. I was talking to my manager; I sent him the message before that I was going to send on the GitHub. And he said, "Yeah, that sounds fine. Just post it." But once you get over that initial hurdle, I think it's a very welcoming space. And you get to work on cool stuff.
KASLIN FIELDS: Granted, there are exceptions, and if you want to be amused by them, Tim Hockin and Davanum Srinivas, Dims is what we call him, did a talk at KubeCon a couple of years ago where Kubernetes maintainers read mean tweets. Very good.
NINA POLSHAKOVA: I think that on GitHub there's a professional level of communication. Yes, you can downvote someone's comment, but it's not as mean as Twitter per se. It can be spicy sometimes.
KASLIN FIELDS: This is true. They have some GitHub comments in there too that are a little spicy. Not super spicy. Let's bring it back to Kubernetes 1.33. As the release lead, you are very familiar, of course, with what's going on with this release of Kubernetes. Doing three releases a year is a lot. Let's help folks keep up with what's going on. What have we got going on in 1.33?
NINA POLSHAKOVA: I think this is a pretty exciting release because we have 64 enhancements going in, which is a pretty big jump from previous releases; in 1.32 we had 44, which is a pretty good number too, but jumping to 64 makes this a significantly larger release. We have a lot of exciting features moving to stable in this release. Sidecars has been a very long-awaited one; native sidecar support is now stable. And then the other one that people mention is multiple Service CIDR support. A lot of cool stuff going into stable in this release. But we also have some fun, exciting, more hot-topic features: dynamic resource allocation has six new features in this release, all related to DRA. It's a good mix of both stability and new and exciting things coming on the horizon.
KASLIN FIELDS: Let's dive into some of these. There are a bunch of blogs that come out around the release, so if you want to dive into details on these things, definitely check out the blogs. I've got a list here from the highlights blog that came out before the release. There's one deprecation mentioned: the Endpoints API, which was stable, is being replaced by the EndpointSlice API, which I believe makes some improvements to the way that endpoints are used. The Endpoints API was a bit overly simple.
NINA POLSHAKOVA: Exactly. And one thing to call out is that deprecated means marked for removal. Features continue to function until they're removed, and in Kubernetes that's at least one year, so anything that's deprecated is still going to function. The Endpoints API specifically is getting deprecated in favor of EndpointSlices, to make sure it's possible to run clusters without the Endpoints controller. EndpointSlices have effectively replaced Endpoints since 1.21, and several new service features, like dual stack and topology, are implemented only for EndpointSlices, not Endpoints. The community is also moving in the direction of EndpointSlices and not developing many new features on top of Endpoints. Another thing to call out is that kube-proxy doesn't even use Endpoints anymore, and the Kubernetes Gateway API conformance tests also use EndpointSlices. In general, the Kubernetes community is moving on to EndpointSlices. What this KEP means is that it's mostly about documentation and tests. It's not deleting or modifying the Endpoints API; that's explicitly listed as a non-goal. It will update tests and documentation to show that Kubernetes is moving toward a world where most users run clusters with the Endpoints and EndpointSlice mirroring controllers disabled, in line with the direction the community is going.
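For reference, here is a minimal, hand-written EndpointSlice showing the shape of the object Nina describes. The names are illustrative; in practice, the EndpointSlice controller generates these automatically for each Service:

```yaml
# Minimal EndpointSlice (discovery.k8s.io/v1), the successor to Endpoints.
# Normally the EndpointSlice controller manages these for each Service;
# this hand-written sketch just shows the shape of the object.
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-svc-abc12                    # illustrative name
  labels:
    kubernetes.io/service-name: example-svc  # links the slice to its Service
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 8080
endpoints:
  - addresses:
      - "10.0.1.15"
    conditions:
      ready: true
```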
KASLIN FIELDS: Very cool. Thank you for that additional context and for explaining the deprecations removals bit. Always something good to bring up. I also want to mention that in the highlights blog post, there's a really nice description at the top about what the term means, about the rules of how deprecations work for different levels of features. If it was an alpha feature versus a beta feature, the rules for how the deprecation works are apparently a little bit different. I thought it was a really good explanation. And so that's the one deprecation that was mentioned in the highlights. But there are three removals that I want to call out. Two of them were in the blog post, then we added one more. There's the removal of kube proxy version information in node status, removal of host network support for Windows pods, and the Git repo volume removal. Let's start off with the kube proxy one.
NINA POLSHAKOVA: This is a field on nodes which is removed in 1.33. The field wasn't accurate because it's set by the kubelet, which doesn't know the kube-proxy version, or even whether kube-proxy is running. It's not a very useful field; it was marked as deprecated before, and now it's getting removed in 1.33.
KASLIN FIELDS: I've heard people talk before about how that field is inconsistent anyway, so you probably shouldn't be using it if you are. The next one is host network support for Windows pods. This one was interesting. The implementation faced unexpected containerd behaviors which limited its usefulness, and apparently alternative solutions were available. So 1.33 removes host network support for Windows pods in Kubernetes.
NINA POLSHAKOVA: Exactly. It aimed to achieve feature parity with Linux and provide support there. The original implementation landed in alpha in 1.26, but it faced some challenges, and SIG Windows decided to withdraw support for it. In 1.33, it's getting removed. One thing to call out, because there was a question in the SIG Windows Slack channel: this doesn't affect host process containers. Host process containers are a special type of Windows container that run directly on the host, and this KEP does not remove those. It was just aimed at providing host networking for regular Windows pods. It never went stable because of the issues it faced, and SIG Windows decided to remove it.
KASLIN FIELDS: It sounds like it was always unstable, so it makes sense to remove it. And the last one we were going to talk about is the Git repo volume. This is something that's been deprecated apparently for seven years and there were security concerns. This involves in-tree driver code because it's about Git repos. So it makes sense to be removing it.
NINA POLSHAKOVA: It's been marked as deprecated since 1.11, so seven years. It's been a long time coming. Since it's been marked deprecated, there have been security concerns because your Git repo volume can be exploited in some ways to get remote code execution as root. So not ideal. And in 1.33, the in-tree driver code support is getting removed.
KASLIN FIELDS: Generally, for in-tree things associated with a vendor or third-party system outside of Kubernetes itself, we've been moving in the direction of removing them, so this follows the trend. And that's the deprecations and removals to be highlighted. Beyond that: feature improvements, new things that folks can look forward to in 1.33. We've got quite a few; you've got 64 enhancements, so there's certainly more than we'll point out, but we've got a few on our list to go over. Starting off, support for user namespaces within Linux pods. This was one I hadn't heard about. Apparently it was alpha in 1.25, beta in 1.30, and 1.33 makes it enabled by default. What does this one do?
NINA POLSHAKOVA: This has been years in the making. The KEP number is 127; if you look at other KEP numbers, they're in the 5,000 range. This is a very early KEP. It's been open since 2016. One of the reasons it took a while for this KEP to mature is that it required a lot of changes across different projects, not just Kubernetes. The KEP details highlight that it needed changes in Kubernetes, containerd, CRI-O, runc, crun, and even the kernel to make it happen. It's a very exciting security feature in Kubernetes because it allows developers to isolate user IDs inside containers from those on the host. That is great if you want to reduce the attack surface if your container gets compromised. This is specifically a big win for multi-tenant Kubernetes systems, where you have shared clusters with different teams and different organizations deploying workloads, because if a workload from one tenant gets compromised, it doesn't potentially affect the other tenants or the host system. It aligns nicely with the principle of least privilege, which is very important for security. As you mentioned, it's still in beta in 1.33, but it's now on by default, so you can try it out in 1.33 and see what you think.
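For illustration, opting a pod into a user namespace comes down to a single field. A minimal sketch, with a placeholder name and image:

```yaml
# Minimal pod using user namespaces (beta, on by default in 1.33).
# With hostUsers: false, UIDs inside the container are mapped to
# unprivileged IDs on the host, so root in the pod is not root on the node.
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo      # illustrative name
spec:
  hostUsers: false       # false opts the pod into its own user namespace
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
```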
KASLIN FIELDS: Some of the Linux sysadmins out there might be excited about this one, and folks who have been looking into Kubernetes for multi-tenant use cases for a long time. I love that call out for the use case. It makes a lot of sense to be able to separate the user IDs on the system, so that if someone were to escape the container, you could shut down those user IDs and it wouldn't affect the rest of the system, I would imagine. I'm very curious about the details on that one now. Another feature improvement in 1.33 that I want to go over is in-place resource resize for vertical scaling of pods. This is something I've been talking with folks about a lot: in-place VPA, in-place vertical scaling, in-place resize. The ability to change the resource allocations associated with your pod without restarting the pod is very exciting. This has been in alpha since 1.27, and it's beta in 1.33.
NINA POLSHAKOVA: And it's another oldie. It's been open, I think, since 2019, a very old feature that has again been long awaited, and it's going to beta in 1.33. It allows changes to the resource allocation of a running pod. Before, when you wanted to change CPU and memory requests, you had to restart the pod, but now you can do it, as the name implies, in place. This is great for stateful workloads like databases, ML training jobs, and inference servers, because you can't disrupt them while they're running, but you might need dynamic resource tuning based on usage. Anything that can't be scaled horizontally or disrupted in execution can now benefit from this feature.
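A minimal sketch of a pod that allows in-place resizes; resizePolicy is the real container-level field, while the names and values here are placeholders:

```yaml
# Sketch of a pod that can be resized in place (beta in 1.33).
# resizePolicy controls, per resource, whether a change needs a restart.
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo                       # illustrative name
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired      # CPU changes apply without a restart
        - resourceName: memory
          restartPolicy: RestartContainer # memory changes restart this container
      resources:
        requests:
          cpu: 500m
          memory: 256Mi
```

On 1.33, resizes go through the pod's resize subresource; with a sufficiently recent kubectl, something like `kubectl patch pod resize-demo --subresource resize --patch '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"1"}}}]}}'` should apply the change without recreating the pod.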
KASLIN FIELDS: Which is huge for AI workloads, which are so resource hungry. This makes it a lot easier to run your systems efficiently while giving those AI workloads the resources they need when they need them. I've talked to a lot of folks about vertical pod autoscaling and how most folks don't use it, because it'll restart your pod if you have it in the mode where it'll do that. There's another mode where the vertical pod autoscaler will just tell you how you should set your resource requests and limits, so that you can do it yourself and manage the disruption that way. But being able to do it in place is going to unlock a lot of potential, I think, especially for those resource-hungry workloads. Maybe Java too; I've heard some interesting rumblings about Java workloads with this. Moving on, another very AI-related area: dynamic resource allocation. DRA, you said, has a bunch of new features in this release, right?
NINA POLSHAKOVA: We have a section called DRA Galore in our blog, because there are so many; a lot of them are relatively small dynamic resource allocation improvements. Dynamic resource allocation is the newer API in Kubernetes for requesting and sharing resources between pods, for things like GPUs, TPUs, and FPGAs. Third-party resource drivers are usually responsible for tracking and preparing those resources, but the allocation of the resources is handled by Kubernetes with structured parameters, which were added in 1.30. In 1.33, there's a bunch of small improvements that make the user experience better. There's additional support for partitionable devices. There are DRA device taints, very similar to node taints, but your cluster admin can now taint devices to limit their usage. There's the DRA prioritized list, which defines how a request can be satisfied in different ways. As you can see, it's a lot of improvements to make the DRA user experience better and fill in some of the gaps that were there in 1.30 but that people now need filled when using these features.
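To make the structured-parameters flow concrete, here is a hedged sketch: a ResourceClaim asking for one device from a DeviceClass, and a pod consuming it. DeviceClass names come from whatever DRA driver you install, so everything named "example" here is a placeholder, and the resource.k8s.io API version may differ by cluster (1.33 serves beta versions of this API):

```yaml
# A ResourceClaim requesting one device via structured parameters,
# plus a pod that consumes it.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: single-gpu                          # illustrative name
spec:
  devices:
    requests:
      - name: gpu
        deviceClassName: gpu.example.com    # placeholder DeviceClass from a DRA driver
---
apiVersion: v1
kind: Pod
metadata:
  name: dra-demo
spec:
  resourceClaims:
    - name: gpu
      resourceClaimName: single-gpu         # bind the claim above into the pod
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      resources:
        claims:
          - name: gpu                       # this container uses the claimed device
```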
KASLIN FIELDS: We had an episode earlier this year from this podcast where we talked about working group device management, which is doing a lot of work on dynamic resource allocation. It was very interesting to hear some of the history of it in that episode as well. With any big technical architectural adventure, there are going to be challenges. I know there were some debates about how to do various different things for the DRA. Seeing a bunch of those come to fruition in 1.33 is really exciting.
NINA POLSHAKOVA: I know there's a classic DRA implementation. I think it was rolled back, and now this is the new version. DRA has had a long history in Kubernetes in different forms, but the new form now has more features for you to play around with. That's exciting.
KASLIN FIELDS: Something I always think about: when we talk with folks about the Gateway API, one thing we always mention is that it's the new way to manage ingress traffic in your Kubernetes clusters, rather than the original Ingress API that came out with Kubernetes, because the original Ingress API was very theoretical. It was, here's what we think people are going to need. But now, since we have all these use cases and users to learn from, we have a better idea of what people actually need, so we've created the Gateway API. I feel like dynamic resource allocation is in one of those spots where we had an implementation of it, but now the use cases are real, very real and very prominent. That's driving this innovation of figuring out how to make it work better for what people need it for.
NINA POLSHAKOVA: Definitely, I think it's always easier to design when you have a specific real world use case in mind. Then you can meet users where they are, not designing for a theoretical user like, it would be nice to configure this. We should just make it configurable for whatever users need. You either limit the scope too much or it's too open-ended, so people don't know what the best practices are. I think it takes a couple iterations to get to a place where it's something that people will use and make sense for modern use cases.
KASLIN FIELDS: Something I hope folks can get out of these release episodes that we do is that we get to see Kubernetes changing over time. We've always got the release lead on, but also Abdel and I have done a bunch of these release interviews, so we have the context of how these things have changed over time. I hope that we go over use cases that help make these things more attainable for folks, rather than just reading the release notes, which are great and contain all of the technical information, but it's hard to put it into context without a little bit more pizzazz.
NINA POLSHAKOVA: This release specifically builds on the previous couple of releases very well, because you see a lot of the usual suspects reappear: sidecars graduating to stable, and dynamic resource allocation highlighted again, since 1.31 highlighted it and now there are more features that build on top of it. Every release is building on top of the other releases. It's not a zero-sum game; it's the project having more and more things built on top of it in the ecosystem. It's not like one release owns a specific feature.
KASLIN FIELDS: When you have three releases a year, the release cycle never stops. Folks are often working on features that don't make it into the release, because they're not done in time, or reviews aren't finished, or something isn't ready, and they end up in the next release. It's always rolling.
KASLIN FIELDS: Coming back to the feature improvements in 1.33, there are a couple more we wanted to go over. One is ordered namespace deletion. I love it when I see the word ordered in any new feature in Kubernetes. That's something I hear from users a lot: they want more control over the order in which Kubernetes does things. Ordered namespace deletion, in this case.
NINA POLSHAKOVA: This one I was surprised didn't exist already in Kubernetes. It didn't exist, but it's going directly into beta in 1.33 and is getting cherry-picked all the way back to 1.30, for good reason. Ordered namespace deletion introduces deletion priority for your namespace. Currently, the deletion order is semi-random, which can result in not-great behavior. If you have a network policy deleted before your pod, that's a security gap. You want to make sure that the pod gets deleted and cleaned up first, and then all resources, based on the logic and security dependencies of that pod, are cleaned up after, so there are no gaps in time where the pod exists but no network policy is in place.
KASLIN FIELDS: You don't have access to it, but the pod is there running whatever was on it. That wouldn't be good. Very exciting to see that one getting added in. The last one that I wanted to go over was enhancements for Indexed Job management. It looks like there are a couple of different things going into this one, and it's graduating to GA.
KASLIN FIELDS: It's a couple of different things with jobs. The two things mentioned in the highlights were per-index backoff limits for Indexed Jobs, and defining conditions for marking an Indexed Job as successfully completed when not all indexes have succeeded. There are a lot of interesting cases with jobs around determining whether they're done or not, which I think become even more important in an AI-based world. I've seen some interesting talks on that. Good to see some more features going into jobs to make them more robust.
NINA POLSHAKOVA: The job success policy was another one where I was surprised Kubernetes didn't support it. With PyTorch-style workloads, only a specific leader index determines whether the job succeeds or not. The default behavior in Kubernetes is that all indexes have to succeed for the job to be marked as complete. That's a limitation, because you might want a specific index, the leader, to determine success, or a specific count. Now you have the flexibility to either set the success count, how many job indexes must succeed for the job to be considered a success, or which specific index must succeed in order to mark it as successful.
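A hedged sketch combining the two Indexed Job features discussed here, with illustrative names and values: a success policy that completes the job once the leader index succeeds, plus per-index backoff limits:

```yaml
# Indexed Job combining a success policy with per-index backoff limits.
apiVersion: batch/v1
kind: Job
metadata:
  name: leader-job                # illustrative name
spec:
  completions: 4
  parallelism: 4
  completionMode: Indexed
  backoffLimitPerIndex: 2         # each index may fail twice before it is failed
  maxFailedIndexes: 1             # more than one failed index fails the whole job
  successPolicy:
    rules:
      - succeededIndexes: "0"     # job succeeds once the leader (index 0) succeeds
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: registry.k8s.io/pause:3.9
```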
KASLIN FIELDS: That's all the feature improvements I had on my list to go over. I'm looking forward to seeing more of those release feature blogs and the release blog itself to get more in depth with all of these. Are there any other deprecations, removals or feature improvements that we didn't go over that you wanted to bring up?
NINA POLSHAKOVA: I think sidecars are another important one, especially since I work on Istio, which is relevant to that space. Personally, when I joined the release team, I think sidecars became alpha in either 1.27 or 1.29, which was my first time either being on the release team or being the enhancements lead; I'm forgetting which cycle it was. I followed it all the way through. It's also related to my work, so it's a nice parallel between open source work and my day job. The sidecars follow me around everywhere, and now they follow me around correctly, natively. The sidecar is a common pattern in Kubernetes, often used by service meshes. Istio and Linkerd have a sidecar mode where a sidecar container gets injected next to the application container. That enables the service mesh to handle all the observability, connectivity, and security functionality, because it abstracts that away from the main container and the sidecar handles all of it. Although it's a very common pattern, it wasn't natively supported; you couldn't coordinate the sidecar with the main container natively in Kubernetes until now. Now, if you turn on this restart policy field, your sidecar is guaranteed to start before and terminate after the main container. That reduces the friction of sidecar adoption and improves the reliability of the sidecar's life cycle. I'm pretty excited about that, just because I saw the enhancement through my whole time on the release team, and I think it's very important for users in the community who are using sidecars.
KASLIN FIELDS: The reason it's so common and popular is that a sidecar is a container that exists alongside the container running your application in a pod in Kubernetes. Since pods are the single unit that Kubernetes manages, both of those containers share networking, storage, and any resource that Kubernetes manages for them. Since those are shared, both containers have access to them, which means you can do the things that service meshes in particular do: intercepting traffic, ensuring that it adheres to mTLS, and all sorts of other things you can do with sidecars. That also makes them dangerous if someone wanted to try to hack containers, as well as useful as a security tool, which is why it's important to have robust support for the pattern, to make sure that you're creating those sidecars in a way that makes sense and managing them throughout the life cycle of the pod.
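The native pattern boils down to one field: an init container with restartPolicy: Always. A minimal sketch, with placeholder names and images:

```yaml
# Native sidecar (stable in 1.33): an init container with restartPolicy: Always
# starts before the main container, keeps running alongside it, and is
# terminated after the main container exits.
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo                      # illustrative name
spec:
  initContainers:
    - name: proxy
      image: example.com/proxy:v1         # placeholder sidecar image
      restartPolicy: Always               # this one field makes it a sidecar
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
```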
KASLIN FIELDS: We've gone over the features, removals, and deprecations in 1.33. Let's get to the fun part. I don't know, a lot of the listeners might debate me that that's the fun part, but I always enjoy hearing about the release theme. What have we got for 1.33?
NINA POLSHAKOVA: The theme for 1.33 is Octarine. If you're familiar with Terry Pratchett's Discworld series, Octarine is the color of magic. On a personal note, I love Terry Pratchett's Discworld series; he's one of my favorite fantasy authors of all time. It's a very long series, so similar to the release cycles in Kubernetes, there are a lot of different books in Discworld. For this release specifically, there were a couple of things that reminded me of Discworld and Octarine as a theme. The first was that KubeCon EU was set in London, and Terry Pratchett is a British author. A lot of the conversation around KubeCon was on the magic of AI, AI observability, things like that. Everyone keeps using the word magic alongside AI, but I think Kubernetes is also pretty magical; it enables a lot of the magic that we see running workloads across different industries, even now running AI workloads. One of his series, about a teenage witch, is called the Tiffany Aching series. The magic there isn't very flashy like Harry Potter; it's day-to-day, average magic. One of his quotes is, "it's still magic even if you know how it's done." I think that's very applicable to Kubernetes, where it might be infrastructure, just running things, but if you take a step back, it is pretty magical that this open source project exists, that it's run in so many different industries, that there are so many contributors around the world contributing to it, and that every release has more enhancements going into it. On a personal note, it fits my favorite author, but I also think it's a reflection of the Kubernetes community and the Kubernetes magic that enables that community.
KASLIN FIELDS: I love that. I have been interested in the Discworld series for a long time, I've heard great things about it, and I've never read it, so maybe this is my sign.
NINA POLSHAKOVA: Everyone keeps asking for recommendations on which book to start with. You can always start at the very beginning, but I feel Terry Pratchett, like Kubernetes, has had evolutions; I think his early works are a little rougher than his later works. If you want a one-off book to try instead of sitting down and reading the whole series, I think Guards! Guards! is a good one. The logo of 1.33 is based on that book; it has a tiny little dragon on the Ankh-Morpork wizard tower. That one's very fun, and you can read it as a standalone. Another good one is Going Postal, if you like more satirical works. It's a satirical view of the postal system in this magical world: if magic ran the postal service, how would it work? Those are the two I would highlight as good starting points if you want an introduction to Terry Pratchett.
KASLIN FIELDS: I feel the post office one also might have some very interesting parallels to Kubernetes. Networking is at the core of all that Kubernetes does.
NINA POLSHAKOVA: A lot of message passing involves these clacks towers that send signals. It fits the theme.
KASLIN FIELDS: Interesting. I've got new books on my list from this one, fantasy books, which is a genre that I love. We've talked about 1.33. We've talked about the release theme. Let's get back to you. You are the release lead for 1.33. You've been involved since 1.27. What was your path like becoming the release lead?
NINA POLSHAKOVA: The Kubernetes release team has this shadowing program that's great because you get introduced to different teams on the release team as a shadow, and then you can decide if you want to switch teams or become a sub-team lead. I hadn't had much experience with Kubernetes other than doing version bumps in our Git repos. I remember being burned by one deprecation where Kubernetes removed the cluster name label. I remember thinking, why was this removed? I don't get it. It was very tiny, but that got me reading the release notes. Then I saw there was this shadow program you could apply to. I wanted to see how the sausage is made: how do people determine what gets in a release? How do deprecations get into a release? I filled out a Google form and wrote why I wanted to be a member of the release team. I'm sure I mentioned how the cluster name scarred me, and that I wanted to make sure no one was scarred ever again. I was lucky enough to get chosen as a shadow on 1.27 for enhancements. I enjoyed the people I got to work with, and I attended KubeCon that year in Amsterdam, where I met a lot of people who were part of the release team. That community made me feel like I wanted to continue contributing. I came back for another shadow experience on enhancements and then eventually led it, then jumped around to the release notes team, was a release lead shadow, and eventually became the release lead. I was very lucky this cycle because I knew most of the people who were either my release lead shadows or the sub-team leads. I had a trusting relationship with everyone working with me, so I felt comfortable reaching out to them if things weren't going well or if I was concerned about a hiccup on the release path. I knew the people I had with me would be able to step in and help me handle it. It started off as a Kubernetes bummer, "I want to see how this is made," and turned into "this is a great, welcoming community and I want to be a part of it."
KASLIN FIELDS: I bet a lot of our listeners out there can relate to that. I love that you were burned and then got involved as a way to better understand that feeling and maybe address it for others. I'm sure a lot of folks listening out there have been burned by Kubernetes releases. So, a proposal: get involved.
NINA POLSHAKOVA: Exactly. I think we have a good shadow program. If you're interested in Kubernetes, if you use Kubernetes, definitely apply. There are a lot of ways to get involved that don't involve writing big enhancement design docs. You can improve documentation. You can help out as a release shadow on the comms team, helping review blogs that get written. There are a lot of ways to get involved and learn what's happening in a new release without committing that much time to writing the code and the design docs that you might think are the main contributions. There are a lot of different ways you can help out.
KASLIN FIELDS: Make sure that you check out the release shadow application when that opens next time. We always try to post about it on social media. There's the Kubernetes developers Google group, which is the primary method that contributors use to communicate with each other. I don't know that we post anything about it on kubernetes.io, but we do definitely on social media and on the Google group.
NINA POLSHAKOVA: The GitHub issues and the Google group will definitely have links. We also try to post it in the Kubernetes Slack; in the SIG Release Slack channel, we usually have a post with the Google form where you can apply.
KASLIN FIELDS: Keep an eye on those things if you're interested in being part of the next release and trying to prevent that pain that you experienced for others. I want to wrap this up with a couple of last items. One is any advice for anyone who does take your advice and fill out that shadow form?
NINA POLSHAKOVA: I think the biggest advice is don't be afraid to ask questions and get involved. Just because we're doing things a certain way doesn't mean we have to keep doing them that way. When I joined Enhancements, it was the first cycle where we moved away from the Google Sheets tracking doc to using a GitHub project board, and that saved so many hours of work. It was great. Ideas like that come from shadows who join, take a look at the project, and want to automate processes or make improvements and make other shadows' lives better. It's not the way that it always has to be; it's an evolving process, and we're always open to improvements. Asking questions is a good thing. That's how you learn.
KASLIN FIELDS: It's always wonderful to have fresh eyes on a process. Doing this three times a year, we've got lots of opportunities for feedback. Just need folks who are willing to give it.
NINA POLSHAKOVA: Another thing to call out is that this release cycle we consolidated the release notes team with the docs team into one team, so there are now only four teams in the release cycle: Enhancements, CI Signal (which tracks the state of CI), Comms, and Docs. But there are other ways you can get involved. If you're interested in a specific SIG, you can always join that SIG's meetings, comment on KEPs, or shadow other roles in the release process; there's branch management, for example. There are a lot of different ways to get involved. The release team is a great way to start, it's how I personally started, and I love the people I get to work with, but that doesn't have to be the way you get involved in the community. If there's a specific area of Kubernetes you're interested in, attend some meetings and get to know people.
KASLIN FIELDS: Or attend New Contributor Orientation, which we hold once a month on the third Tuesday of the month. We have a playlist of recordings. If you're worried about where to start, worried about committing a social faux pas, as many of us are when we join open source projects, make sure you check out New Contributor Orientation. We'll give you the guide on how the whole community works and how to find a place that works for you. I love that call out. To close things up, I always like to ask the question of the release lead. Upgrading can be very painful, as you know, from personal experience. Why should listeners upgrade to 1.33?
NINA POLSHAKOVA: I think we highlighted a lot of features, and there are some we didn't even get a chance to mention that you can read about in the blog. The nftables kube-proxy backend is now stable, multiple Service CIDR support is here. If you read the blog and are excited about some features, I think that's a good sign that maybe it's time to look into upgrading. I do think that if I had read the KEPs and the release notes, I would have caught the cluster name issue back when it burned me. Reading is important; use Ctrl-F and try to find anything that might affect you. It is important to keep up with the latest features. We have a lot of exciting things this release. There are 64 enhancements, and they range from things that improve user experience, like the kuberc feature that we didn't get a chance to talk about but which is a cool feature going into alpha, to things that improve stability, like the sidecar support we talked about, and new exciting things like dynamic resource allocation. This release has it all. Check it out. Read the blog. Hopefully it inspires you to upgrade.
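As one small taste of the stable features Nina mentions, multiple Service CIDR support means you can extend a cluster's ClusterIP space by creating a ServiceCIDR object. A minimal sketch with an illustrative range (on 1.33 the API should be served at networking.k8s.io/v1; older clusters serve a beta version):

```yaml
# Adding a second Service CIDR to a running cluster (stable in 1.33).
apiVersion: networking.k8s.io/v1
kind: ServiceCIDR
metadata:
  name: extra-service-cidr    # illustrative name
spec:
  cidrs:
    - 10.100.0.0/16           # illustrative range for new ClusterIPs
```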
KASLIN FIELDS: And with that, thank you very much, Nina. I've really enjoyed learning about 1.33 from you.
NINA POLSHAKOVA: Thanks for having us.
KASLIN FIELDS: I've had the whole release team on through you. Vicariously. That's the job of the release lead, right?
NINA POLSHAKOVA: Yes, channeling the team. You're the avatar of the next release.
KASLIN FIELDS: And I feel this fits into the fantasy theme as well. I'm sure there's some of that in Discworld, right?
NINA POLSHAKOVA: I don't think there are any avatar references, but it's in the genre. Thanks for having me.
KASLIN FIELDS: That's okay, it's in the genre. But thank you very much.