#248 March 11, 2025

Kubernetes Ingress & Gateway API Updates, with Lior Lieberman

Hosts: Abdel Sghiouar, Kaslin Fields

Lior is a software engineer lead at Google Cloud focusing on GCE, Kubernetes, and Service Mesh. He is a leading contributor to Gateway API and the maintainer of Ingress2gateway.

Do you have something cool to share? Some questions? Let us know:

News of the week

MOFI RAHMAN: Hi, and welcome to the "Kubernetes Podcast from Google." I'm your host, Mofi Rahman.

KASLIN FIELDS: And I'm Kaslin Fields.

[MUSIC PLAYING]

MOFI RAHMAN: This week, we spoke to Lior Lieberman. The main goal of the conversation was to learn about the ingress2gateway tool, but we touched on all things Gateway API and its future. But first, let's get to the news.

KASLIN FIELDS: A new nftables mode for kube-proxy was introduced as an alpha feature in Kubernetes 1.29. Currently in beta, it is expected to be GA as of 1.33. The new mode fixes long-standing performance problems with the iptables mode. In Kubernetes 1.31 and later, you can pass a flag to kube-proxy to enable the feature. If you're running a newer Linux kernel, you're encouraged to try it out and share feedback.
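For listeners who want to try it, switching modes is a kube-proxy configuration change. A minimal sketch, assuming you manage kube-proxy through its configuration file (the `--proxy-mode=nftables` flag is the command-line equivalent; check how your distribution actually configures kube-proxy):

```yaml
# kube-proxy configuration file (Kubernetes 1.31+, beta at the time of this episode)
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "nftables"
```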

MOFI RAHMAN: The CNCF has announced a new incubating project, Kubescape. Kubescape joined the Cloud Native Computing Foundation sandbox in November 2022, and has achieved a number of milestones since then. The open-source Kubernetes Security project is designed to offer comprehensive security coverage through the entire development and deployment lifecycle. It provides posture and vulnerability management and automatic hardening policies. The Kubescape operator is a set of microservices that monitor Kubernetes clusters from within.

KASLIN FIELDS: The OpenTelemetry community announced the beta release of the OpenTelemetry Go auto-instrumentation project. OpenTelemetry Go auto-instrumentation allows developers to collect traces from their Go applications without requiring manual code modifications or rebuilding binaries. By dynamically instrumenting applications at runtime using eBPF, this project hopes to lower the barrier to adopting observability best practices and provide deep insights into your application's behavior.

MOFI RAHMAN: The CNCF has introduced new guidelines for creating Phippy & Friends books. Phippy the giraffe/PHP app is the main character of "The Children's Illustrated Guide to Kubernetes," a fun children's book-styled introduction to Kubernetes, originally produced by Deis. Deis was acquired by Microsoft in 2017, and the book and its characters were donated to the CNCF a year later, in 2018.

With the donation, the CNCF licensed the characters under a Creative Commons license, making them available for use in a variety of community-related spin-off books about other open-source projects, cool new use cases, and more. The new guidelines separate Phippy books into two types-- project-related books and Kids Day books. Maintainers who create project-related books will now also have book signings at KubeCon events, coordinated by the CNCF.

KASLIN FIELDS: And that's the news.

MOFI RAHMAN: Lior is a Software Engineer Lead at Google Cloud, focusing on GCE, Kubernetes, and service mesh. He's a leading contributor to Gateway API and the maintainer of ingress2gateway. Welcome to the show, Lior.

LIOR LIEBERMAN: Hey, I'm excited to be here. Thank you, Mofi.

MOFI RAHMAN: So other than what I have said about your background in the intro, anything else you'd like to add for the audience?

LIOR LIEBERMAN: No, yeah, I'm doing service mesh at Google. I've been doing a lot of Kubernetes work recently, mainly in SIG networking. And yeah, as I said, I'm excited to be here and talking about Gateway API.

MOFI RAHMAN: So you said you've been working in the Kubernetes networking SIG. How did you get involved with the Kubernetes open-source project itself?

LIOR LIEBERMAN: Well, actually, fun fact-- back in the day, I used to avoid companies that relied on Kubernetes because I found it frustrating and over-complicated, learning all these abstractions, especially when debugging and knowledge sharing with others. So it was a real pain, and that's not even talking about companies that weren't containerized at all. Then I started working for a gaming company, and I experienced some of these painful challenges, which Kubernetes happened to solve.

So for example, I remember we, the DevOps team, had to be very involved in the rollout process, and there were custom, fragile scripts everywhere-- taking an EC2 instance out of the load balancer, running tests, sending requests, and bringing it back. So everything was really fragile.

And I also remember, the whole idea of self-healing wasn't there. Many nights I woke up just to run an NGINX reload to fix some web servers-- I still remember the pager tone. And basically, I was reading about Kubernetes, and all of these problems just happened to be solved there, so it was really exciting.

And fast forward-- when I started working at Riskified, we really doubled down on Kubernetes. We had strong support from the SVP of R&D back then. So we went on a journey of migrating 400 services to Kubernetes, some of which, of course, were split out of a giant monolith.

And that's where I really got to learn the ins and outs of the system and how powerful it was. And I think, most importantly, I got to learn the power of the community. When we had issues, we posted them, whether on GitHub or Slack, and we saw so many people eager to solve them or contribute their view of a workaround.

And I remember one thing in particular-- we were a relatively early adopter of Argo CD, I think back in 2020. And we really wanted to streamline and automate the process of creating Argo applications. That capability basically didn't exist yet, but there was a subproject, the Argo CD ApplicationSet controller, which was used to streamline this process and remove bottlenecks from the DevOps teams. It didn't have everything we needed, so we developed something internally, and then we said, OK, maybe we should just contribute it back to the community.

And since then, I got the whole view of how it works and how smooth it was. And when Google came along and gave me the opportunity to work here, I figured it could be a good chance to be one of the people who work on Kubernetes more regularly and solve users' pains. And that's what led me to where I am today.

MOFI RAHMAN: Yeah, I mean, Kubernetes itself is a fairly big project in itself. But even within this project, I feel like networking is probably one of the more complex because it just touches so much surface area of the project, and it is-- so much of that actually needs to happen outside the Kubernetes core. What has that experience been like working in the networking side of Kubernetes?

LIOR LIEBERMAN: Yeah, I really resonate with that. I wouldn't say it's just complex. It's sometimes even intimidating to go into the whole Kubernetes code base and understand where you should put your contribution, or where to look for things, without real expertise and without the people who have been there a long time.

So yeah, I think that's why we see more and more projects-- two that come to mind right now are Gateway API, obviously, and the Network Policy API-- that started being developed outside, what we call out-of-tree, so outside of kubernetes/kubernetes.

And it provides value: you can iterate quickly, and you can have more contributors who can easily contribute without having to learn the whole code base, the ecosystem, the testing machinery, everything.

MOFI RAHMAN: Yeah, and for our audience listening, about what Lior mentioned with in-tree and out-of-tree-- Kubernetes being a project of the size it is, contributing anything to the core requires a lot. There is, for good reason, some red tape around getting new things into the actual core of Kubernetes. Developing things out-of-tree is often the easier way to gain mindshare, and also the velocity that a lot of projects need to show that the idea is actually going to work, rather than trying to go through the red tape of merging something into the core of Kubernetes.

So we have seen some really great projects come out of that. Even within the core, a lot of the vendor-specific stuff has been taken out-of-tree over the last two years. I remember one of the commits that had something like 1.2 million lines of deletions [LAUGHS] from the core, which took all the vendor-specific code from in-tree to out-of-tree. So that has been great.

But we are here to talk about some of the network-specific things. Most of our viewers and listeners probably already know some of these terms, but just in case they don't-- tell me a little bit about the Ingress API in Kubernetes, and why we're here talking about Ingress-to-something-else.

LIOR LIEBERMAN: Right. Well, the Ingress API, I think, is very widely known. It basically provides the capability to expose and control how external traffic accesses services in your cluster. Ingress lets you define rules for HTTP and HTTPS traffic, and it can handle things like host-based routing, path-based routing, and SSL termination. So it's pretty useful, and it was designed to be simple. I guess it was definitely a win back then.

While Ingress was, and still is, useful, it's a bit limited. And basically, that's why Gateway API was created. We can talk a little more about the limitations, but I think people need more than what Ingress provides.
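As a refresher, a minimal Ingress covering the features Lior mentions might look like this (the hostname, Service name, and Secret name here are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  tls:                              # TLS/SSL termination
  - hosts: ["app.example.com"]
    secretName: example-tls         # hypothetical TLS Secret
  rules:
  - host: app.example.com           # host-based routing
    http:
      paths:
      - path: /api                  # path-based routing
        pathType: Prefix
        backend:
          service:
            name: api-svc           # hypothetical backend Service
            port:
              number: 8080
```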

MOFI RAHMAN: So what would be some of these key limitations of the Ingress API that you mentioned?

LIOR LIEBERMAN: Well, as I said, Ingress lacked a lot of core features. You could do useful but pretty simple things, which led providers and implementations to put custom extensions everywhere, usually in the form of annotations, which we all know turn out to be very, very messy. And those extensions were not portable-- for example, you could set up your AWS load balancer config as an annotation.

But then you want to do more. You want to start using GKE. You want to start using Istio or other projects. You cannot really port it and just reuse the same Ingress. It also lacked protocol diversity. As I mentioned, you could do HTTP and HTTPS, but TCP wasn't there, for example-- not even gRPC and things like that.

And lastly, I think it also had an insufficient permission model. When we talk about Gateway API, we can address why its permission model is better. But basically, so many things that different personas should manage were all being managed in the same resource, so it was very hard to delegate proper permissions to teams in organizations.
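The annotation sprawl Lior describes tends to look something like this in practice. The annotation keys below are real examples from ingress-nginx and the AWS Load Balancer Controller; the manifest around them is hypothetical, and the point is that each key means nothing to any other implementation:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: annotated-ingress
  annotations:
    # Understood only by ingress-nginx:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-body-size: 8m
    # Understood only by the AWS Load Balancer Controller:
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  defaultBackend:
    service:
      name: app-svc                 # hypothetical backend Service
      port:
        number: 8080
```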

MOFI RAHMAN: So you mentioned that we have, now, a newer API-- well, new in comparison, over a couple of years already-- the Gateway API. So tell me a little bit more about the Gateway API and maybe some of the design philosophy of the Gateway API and why it was built the way it was.

LIOR LIEBERMAN: Yeah, if I had to put it in one sentence, I'd say Gateway API is the next generation of Kubernetes Ingress, load-balancing, and service mesh APIs. That's only if I have to capture it in one sentence. But I think the real value of Gateway is the composability of the different pieces of the API and, as we mentioned, the persona-focused model, which addresses the insufficient permission model that Ingress had.

And I don't want to go too deep into the structure-- there are a lot of good KubeCon talks, podcasts, and blog posts on Gateway already covering which personas we focus on and what the API structure is. But the different pieces, the different resources that make up the API, let us use it much more efficiently.
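To make the persona split concrete, here is a rough sketch of the two main resources (names and namespaces are hypothetical): a cluster operator owns the Gateway, while an application team owns the route that attaches to it:

```yaml
# Owned by the cluster operator / infrastructure team
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: example-class   # supplied by your implementation
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All                   # which namespaces may attach routes
---
# Owned by the application team, in their own namespace
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: app-team
spec:
  parentRefs:
  - name: shared-gateway
    namespace: infra
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /app
    backendRefs:
    - name: app-svc                 # hypothetical backend Service
      port: 8080
```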

MOFI RAHMAN: And we actually had Rob Scott talk to us about the Gateway API when it was in beta, in episode 186 of the podcast. So if anyone wants to hear about an earlier, beta iteration of the Gateway API, they can go listen to episode 186, where Rob Scott talked to us about that.

And even that was about two to three years after the initial announcement that the Gateway API was going to be worked on. So Gateway API has been in the works for a while-- 2019 was one of the first times, at one of the KubeCons, that the whole Gateway API was announced. So we're already looking at five, six years in the making.

LIOR LIEBERMAN: Yeah, and Mofi, if I may-- it was initially started as the Services API, by the way. So people who have been around for a while probably heard the announcement of the Services API. And Rob was one of the people who led the renaming and all that.

MOFI RAHMAN: Yeah, I think naming is probably one of the hardest problems in tech, because in the beginning, when I was personally looking things up about Gateway, I kept running into API gateways-- because, again, Gateway API and API gateway are so close to each other. So I would run into things like, oh, if you have APIs, this is how you create an API gateway. And I was like, that doesn't sound right.

So as of now, after a couple of years and with enough KubeCon talks published on the Gateway API topic, if you google Gateway API today, you'll probably get pretty close to the right answer, rather than just a bunch of different API gateways.

So you mentioned, also, that now Gateway API takes somewhat of a different approach of how to design your external traffic coming into your cluster. But for a lot of our users and a lot of the people using Kubernetes for a long time, obviously, they always had to have some ways to have people access their application. So a lot of that application might already have a pretty robust ingress with a number of annotations, as you mentioned, for different services and things.

What would be the process, then, to think, OK, I think this whole Gateway API thing sounds cool; a lot of new services and features that are coming with it; I like the ability to have this permission control system that comes with it as well-- what does the migration path look like for users?

LIOR LIEBERMAN: I think even before we jump into the migration, one additional thing I'd like to mention is that the Gateway API also has this very impressive conformance test system, which we've built, and I think we have close to 30 implementations. So beyond the advantages we touched on before, it's also very well tested and widely used.

And for those who have these Ingress configurations, which I guess is a lot of the audience, a lot of the listeners, there's this tool called ingress2gateway, if you haven't heard of it. It was designed, at least initially, to provide a good starting point for migration. It's not designed to be a comprehensive magic tool that reads all your Ingress configurations and spits out finished Gateway API resources, but it is designed to give you a good starting point and make the migration less intimidating.
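For orientation, an invocation looks roughly like this at the time of writing (flags and provider names may change, so check the project README for the current ones):

```shell
# Convert Ingress resources from the current kubectl context into
# Gateway API equivalents, using the ingress-nginx provider logic
ingress2gateway print --providers=ingress-nginx > gateway-resources.yaml
```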

So as I said, it started by addressing only simple Ingress configurations, taking all of them and outputting Gateway API configs. But as we all know, and as you say, Mofi, a lot of people have custom annotations, and a lot of people have CRDs. Some implementations said, oh, annotations are weird-- let's do a CRD in addition to Ingress.

So some implementations had a TCP Ingress CRD, for example, because as we mentioned, Ingress didn't support TCP. So we added extensibility to ingress2gateway, continuing the spirit of Gateway API being extensible: implementations and providers can plug their conversion logic into the tool, and it will convert custom annotations and CRDs to the core features we support in the Gateway core APIs, which I think is pretty useful.

And I just wanted to highlight that we already have some major implementations supporting the tool-- obviously not the full matrix of possible features. We have Istio. We have GKE. We have Cilium. We have Kong. And we're seeing more implementations coming on board and putting in the time to implement it, which is obviously driven by user requests. So if you have a use case like that, make sure the implementation you're using is aware that you're looking to migrate your configs to Gateway API.

MOFI RAHMAN: We will make sure to link the ingress2gateway repo in our show notes. So if you have any use cases or just want to check it out-- the implementations and more information about ingress2gateway-- that will be in the show notes below.

LIOR LIEBERMAN: Yeah, that's great. And one of the other things the tool gives you, which I wanted to highlight, is a comprehensive notifications table that attempts to explain the conversion logic-- what was converted to what-- and warns you about things it skipped, whether it's a feature that's not supported in Gateway or a feature that the implementation hasn't implemented in the conversion yet.

So you have the Gateway API equivalents for the Ingress resources you had, and you also have a notification that attempts to say, hey, here's what was converted, here's what was not, and here's what you should go ahead and complete yourself.

MOFI RAHMAN: Yeah, so I feel like the out-of-the-box Ingress that comes with Kubernetes will, for the most part, probably have full coverage in the Gateway API. But as you mentioned, people have been using annotations, other CRDs, and extensions on top of Ingress that may or may not be fully implemented in Gateway.

Can you think of any kind of major use cases of that nature that people are saying-- or people are finding any challenge with moving to Gateway, other than the actual task of moving your configurations onto Gateway? Is there any use case where Gateway just wouldn't support the things people have been using Ingress for?

LIOR LIEBERMAN: I think if we want to dig up those cases, we'll probably need to look somewhere other than the Gateway API repo. That's kind of a misconception, by the way-- people expect to find these issues opened on Gateway. But if I'm a user, and the Gateway API implementation I use doesn't support something I need from Ingress, I'll probably go to my implementation's page and file it there.

And one of the things we've been seeing is Ingress NGINX positioned at the center of it. Ingress NGINX-- I could comfortably say it's, in my opinion, by far the most adopted Ingress implementation for Kubernetes. I can't say it's the majority, I don't know, but a lot of users are using Ingress NGINX-- definitely a lot.

And Ingress NGINX has a lot of annotations for everything, so we're going to see a lot of examples there, that's for sure. I know Gateway does not support all of these annotations, partly because I implemented some of the logic to migrate from Ingress NGINX to a Gateway implementation.

But there's some good news, in case you missed it. The Ingress NGINX project actually announced their intention to start a repository for a new implementation of Gateway API, called InGate. It was announced in the community meetings as well, and work has moved to InGate. There have been KubeCon talks about InGate, and there will be more at KubeCon London soon.

So I think we're moving in a good direction for addressing all the cases for Ingress NGINX users, which, again, I think is a very big portion of the use cases you mentioned that are not yet supported in Gateway.

MOFI RAHMAN: So this is awesome, that the community is slowly building out all these Gateway implementations for pretty much all the use cases people have been using Ingresses for. Hopefully they'll now be using Gateway for those. But I did hear that Gateway has more features than just routing-- just getting traffic from outside into your cluster. What else is Gateway doing in terms of routing for cloud-native applications?

LIOR LIEBERMAN: I think you got it right. Gateway is definitely more than an Ingress replacement. Gateway also supports service mesh APIs, which some of our audience may know as GAMMA, or Gateway for Mesh. You basically achieve that by taking your routes-- the same exact HTTPRoutes, or gRPC routes, or whatever other routes that used to be attached to a Gateway-- and attaching them to Services instead.

So this GAMMA initiative started a while ago and addressed this as a primary case, because imagine-- we have a lot of features that are basically the same, whether it's L7 Ingress or L7 in the mesh. Features like path-based routing-- why would that be unique to a request coming through a Gateway? Request mirroring, all that-- we see a lot of similarities between the traffic patterns. Obviously, there are some niches where that's not the case. But yeah, Gateway is definitely more than an Ingress replacement.
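In GAMMA terms, the only structural change is the parentRef: the same HTTPRoute shape attaches to a Service instead of a Gateway. A sketch, with hypothetical Service names:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: reviews-route
spec:
  parentRefs:
  - group: ""           # core API group: a Service, not a Gateway
    kind: Service
    name: reviews       # hypothetical Service the mesh traffic targets
    port: 9080
  rules:
  - backendRefs:
    - name: reviews-v2  # hypothetical backend Service
      port: 9080
```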

And in fact, there are some implementations that only support GAMMA-- the Gateway for Mesh APIs. I'm sure a lot of listeners are hearing about Istio ambient mode right now, and just to confirm, for many cases Istio ambient does not support APIs other than the GAMMA APIs. And we have more implementations coming, so I think we're definitely going to see more mesh implementers, more mesh projects, supporting the GAMMA APIs.

MOFI RAHMAN: So it seems like Gateway API and service meshes seem to have a number of overlaps in terms of features they can provide. What would be a reason to have a service mesh at this point, with Gateway API evolving to do a lot of the service mesh-like things?

LIOR LIEBERMAN: Right. And just to clarify, service mesh is here to stay. Previous implementations had custom APIs-- custom CRDs-- but now they can all support the Gateway APIs, and obviously we need to keep evolving more and more use cases into it.

But at a high level, why even have a service mesh-- that's a good question, and probably a broader one. I think a service mesh starts to be very handy when you have a decent number of microservices in your cluster and you start needing advanced networking capabilities.

You want to do canary traffic splitting. You want to do other L7 networking functions, percentage-based request mirroring, or rate limiting-- all without needing to change your application code. This is where it becomes useful. It also becomes useful when you want to understand what happens to the requests coming into your cluster, so telemetry and observability are key here. And also when you start to care about security and identity-based authorization.

And that's not IP-based authorization, not network policy-- it's identity-based authorization, because you have an identity for every workload running in your mesh. So yeah, I think that's mainly where a service mesh would be very useful to deploy.
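The canary traffic splitting Lior mentions maps directly onto weighted backendRefs in an HTTPRoute. A sketch, with hypothetical Gateway and Service names:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: canary-split
spec:
  parentRefs:
  - name: shared-gateway    # hypothetical Gateway
  rules:
  - backendRefs:
    - name: app-v1          # hypothetical stable Service
      port: 8080
      weight: 90            # ~90% of requests
    - name: app-v2          # hypothetical canary Service
      port: 8080
      weight: 10            # ~10% canary
```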

MOFI RAHMAN: Yeah. I think some of my ideas about service meshes were about those additional things you mentioned, like the metrics and observability that a service mesh gives you out of the box for microservices-based applications. Having been part of this journey, so to speak-- working in the Kubernetes networking SIG-- what are some of the misconceptions you've seen about the Gateway API in the community that you'd like to use this moment to help clarify?

LIOR LIEBERMAN: Yeah, well, a common one I hear a lot is people think that Gateway API is not core Kubernetes, partly because it's not installed by default or other reasons.

But I want to clarify that the API is a core Kubernetes API. I mean, it depends what you call core, but it's a Kubernetes API. It was just a deliberate decision for it to be installed and managed as CRDs, out-of-tree, as we touched on in the beginning-- partly for faster iteration, for flexibility, and for easier contributions. Again, people don't need to learn the whole Kubernetes repository structure.

And by the way, there are some discussions in flight right now about whether we can include those CRDs as part of the default Kubernetes installation. So assuming those land-- from whichever Kubernetes version they land in-- you're going to have the Gateway API CRDs installed by default. Again, not an implementation, but the CRDs would be there, so you can start using them, assuming you have an implementation installed.

I think the other misconception I can think of-- talking to people at KubeCon, talking to friends and colleagues-- is they say, it's just too complicated for what I need. And I think we often believe our use case is simple enough, but it often evolves.

And there's a very high chance that the one additional feature you're going to need is not supported with Ingress. And that's when you've got Ingress all over your environment, and then you say, oh, now I need to do a whole migration because I didn't pick Gateway to begin with.

MOFI RAHMAN: Yeah, so I think the follow-up question to that for me would be, then, at this point in 2025, when we're recording this, would you recommend anybody who is starting a new application on Kubernetes and they have networking needs for external traffic-- should they reach for a gateway as a default?

LIOR LIEBERMAN: I would say, definitely, yes. Now, are there any distinct use cases where Ingress is better or recommended? I would say that if you absolutely believe Ingress is all you need, and you're 100% confident and sure about it, just stay with Ingress. But there's an increasing number of features that are supported only with Gateway, not with Ingress, and an increasing number of implementations that only support Gateway and not Ingress.

And by the way, if there's something missing from Gateway API, and you would use Gateway only if that feature existed-- let us know, and we'll definitely work on it.

MOFI RAHMAN: So what you're saying is-- and I don't want to put words in your mouth, so I'll let you answer that. What you're saying is Gateway API is ready for prime time.

LIOR LIEBERMAN: Well, I'd say prime time is already here. Many thousands of users already use Gateway in production, so definitely, prime time is here. And I'll also add that this is only the beginning. I believe that if some of the short-term contributions land, like installing the CRDs by default, which we talked about, those thousands will grow to an even bigger number.

MOFI RAHMAN: And there has been some work, especially in the-- I forget the name of the SIG, but SIG Serving, I think?

LIOR LIEBERMAN: Yeah, Working Group Serving. Yes.

MOFI RAHMAN: Yeah, Working Group Serving. And there has been some discussion about things like the Gateway API Inference Extension. I've probably changed my mind on this quite a bit, but I used to think that serving large language models and AI models on Kubernetes was just another type of application.

But since then, seeing how much of a challenge some of these accelerated workloads are, I've changed my mind on that particular hill, I would say. So the Gateway API Inference Extension-- tell me a little bit about it.

LIOR LIEBERMAN: Oh, yeah, I anticipated that one would come up. So just to clarify, the Gateway API Inference Extension started as a collaboration between Working Group Serving and SIG Network. I think it's formally owned by SIG Network now-- I'm not sure, and it's not that interesting who owns it.

But what it gives you is basically one of the ways of extending the Gateway API, which claims to be very extensible, and it addresses people with inference needs on Kubernetes. So assume you're a company running your models on Kubernetes. More specifically, without getting too deep for our ML audience, you're running with LoRA adapters-- you're using LoRA adapters to tailor your model for specific tasks.

Now, a lot of people are using frameworks like vLLM that are basically tailored for this-- for working with LoRA and everything. So you get a request through the gateway, right? Let's say a user sends a request. It gets to the Gateway, and it needs to find the right model server to go to.

So traditional load-balancing algorithms would just do round robin or least-request-- whatever load balancing you've been using. But if the request is asking for a specific LoRA adapter, and we can send it to a pod that already has that LoRA adapter loaded, we save the time the framework would spend reloading the adapter into memory. That turns out to be very, very efficient-- it increases throughput significantly and reduces latency as well.

So this is what the project came to address. For the people who are familiar with the Gateway API structure, this extension lets you target a new InferencePool API from an HTTPRoute. So say you get a request for /completion attached to the Gateway-- instead of just routing it to a Service, you route it to this InferencePool, a special API object, which does the selection for you and gives you smoother, more efficient pod selection.
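As a sketch of the shape being described-- and note that the Inference Extension was still evolving at the time of this episode, so the API group and kind below are taken from the project's early versions and may have changed since:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: llm-route
spec:
  parentRefs:
  - name: inference-gateway                # hypothetical Gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /completion
    backendRefs:
    - group: inference.networking.x-k8s.io # extension API group
      kind: InferencePool                  # instead of a Service
      name: vllm-pool                      # hypothetical pool of model servers
```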

MOFI RAHMAN: Yeah, that would definitely be super useful for those LoRA use cases, and maybe even more as more LLM-based things come about and more research happens into how you do very specific tasks. Or even-- again, this is probably not even in the feature catalog-- as agents become more prevalent, you'd probably be able to route to specific agents and things of that nature.

So this is going to be the final question, and it's a bit of a future prediction. In a couple of years, in a future episode of the podcast, we'll come back and see how much of the prediction Lior got right. So this is for all the marbles. [LAUGHS] Where do you see Kubernetes networking evolving in the next few years? And more specifically, where do you see the Gateway API fitting into future iterations of Kubernetes networking?

LIOR LIEBERMAN: Well, when you think of Kubernetes networking, you're thinking about Services. You're thinking about Ingresses. You're thinking about mesh networking. When you say Kubernetes networking, that's what I'm thinking about. And to clarify, that's where Gateway API is today. So I expect a huge portion of SIG Network's bandwidth will be invested in Gateway over the coming five years.

There are also people who have started working on ClusterIP gateways. I'll refer to this lightning talk from Tim Hockin back at KubeCon Chicago, which laid out why the Service API is flawed and all the mistakes that were made with Service. Basically, these ClusterIP gateways are a replacement for Service-- and obviously, don't quote me on the name; the name will probably evolve, as we all know. But this would likely be developed as part of Gateway-- we already have issues open in Gateway API-- and it would unlock a lot of the limitations in the Service APIs today.

MOFI RAHMAN: Well, I think that was a very solid prediction-- and more than a prediction. Having been involved in the project and seen the trajectory, you're working with insider information, so to speak, [LAUGHS] about what is going to happen. But I am super excited for the usage of the Gateway API, and for the thing you mentioned about CRDs being installed as part of the Kubernetes distribution, with the base installation of Kubernetes, and Gateway becoming almost the default thing people reach for. That would be exciting.

Having used Gateway API for certain use cases, the user-level permission boundary-- it's awesome to have the responsibility segregated in the right way, with cluster-level permissions versus user-level permissions.

Not even counting all the features-- just that, as an application developer, I can think about my routes and not have to think about all the other permissions for the actual service entry and all those things. That by itself is worth the price of admission for me. All the other things almost feel like a bonus. So I'm super excited for the evolution of Gateway and seeing more adoption. And hopefully, in a couple of years, we'll come back and say that Gateway API is feature-complete and you have no reason not to use Gateway for anything.
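[The role separation Mofi is describing is visible in the API itself: a cluster operator owns the Gateway and its listeners, while an application developer only needs permission on HTTPRoutes in their own namespace. A minimal sketch, with hypothetical names:]

```yaml
# Owned by the cluster operator: class, listeners, and which
# namespaces are allowed to attach routes.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: example-class   # hypothetical class name
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All
---
# Owned by the application developer, in their own namespace;
# no cluster-level permissions are needed for this part.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app-route
  namespace: team-a
spec:
  parentRefs:
  - name: shared-gateway
    namespace: infra
  rules:
  - backendRefs:
    - name: my-app-service
      port: 8080
```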

LIOR LIEBERMAN: Yeah, I mean, it's mostly feature-complete now, I guess. But you cannot address everything-- people out there in the community are creative, which is, by the way, one of the main drivers for Kubernetes and everything. But addressing all the features and standardizing them in one thing-- it's just not feasible, and probably not encouraged.

And as I said, if there's a feature that you would like to have in Gateway, and that's the only thing that would bring you to use Gateway, definitely open an issue. Definitely reach out on Slack. And we'll make sure to have it supported.

MOFI RAHMAN: And I think that's as good a place as any to end this conversation. So thank you so much, Lior, for spending your precious time talking with us and letting us learn a bit more. I certainly learned a bunch of things that I did not know, which I'll go back and research to understand a bit more. So thank you so much.

LIOR LIEBERMAN: Thank you, Mofi. I was happy to be here.

KASLIN FIELDS: Thank you so much, Mofi and Lior, for that interview. The Gateway API is always something really interesting to talk about. The sessions at KubeCons are always packed. It can be hard to get into the rooms.

I was actually moderating one of those sessions one time, and they almost didn't let me into the room because the room was full. And I was like, I have to do things in this session! [LAUGHS] Luckily, they did let me in. But those sessions are often full, so I'm always excited to hear more about Gateway. Ingress is such an important feature of any workload that you're running in Kubernetes, so it's always good to learn more about it.

And networking, as you said at the beginning of the episode, is so critical to everything that Kubernetes does. It's quite complex because it underlies everything. So I've been out for the last couple of weeks. Mofi, tell me about what you talked about.

MOFI RAHMAN: Well, before I do, one quick life hack: if you're struggling to get into the Ingress or Gateway talks, you can just reach out to the maintainers and ask them for an interview, and they will tell you all about it. So that's just one life hack from me.

KASLIN FIELDS: That is true.

MOFI RAHMAN: And if you do a podcast with them, they'll tell you in person-- well, not in person. That was over GVC, over a video call. But they will tell you everything you want to know. So this interview-- again, my initial plan was to talk about the ingress2gateway tool that was released to help people convert their Ingress manifests into Gateway manifests.

But having access to one of the maintainers of the project, someone who contributes to Kubernetes networking, I just kept asking more questions, and we kept going into a lot of the Gateway things. Lior was gracious enough to answer all my questions, so that's the episode we got. And I think it was better for it, because we got a much nicer, wrapped-up picture of the entire story of what Gateway actually looks like.

And the other bit I wanted to ask about-- just to get his opinions, albeit possibly a little biased because he works on the project-- was how the community is feeling about Gateway being, quote unquote, "production-ready." So that's another set of questions we got into in the chat.

KASLIN FIELDS: I liked that at the beginning, you all actually talked about-- Kubernetes history is one of my favorite things, you know? Very big Kubernetes history buff here. And you all talked about the removal of all of the cloud provider code, those million lines of code that were removed from Kubernetes last year, and how the Gateway API relates to all of that. It's in the same vein as this trend in Kubernetes of moving anything provider-specific outside of Kubernetes itself so that it's not tied to the Kubernetes release cycle, which is, I think, a really good pattern. And Gateway also does that, so it's always interesting to talk about.

MOFI RAHMAN: Yeah, it is a great pattern. But I think one of the challenges with that is that, as of now at least, Gateway does not come pre-installed as part of your Kubernetes distribution. So for many people, it feels like you are pulling in something external that may not seem like, quote unquote, "original Kubernetes."

So that's something we also talked about: in future releases of Kubernetes, they're going to figure out a way to ship a set of CRDs as part of the installation, because Gateway is part of core Kubernetes. It's just a decision that was made to make Kubernetes more extensible by having these extension points, instead of having everything in-tree, so to speak.

KASLIN FIELDS: That is something that's come up in open-source conversations quite a bit lately. And I like the idea of including CRDs with the installation, but not including them in Kubernetes itself. That's been one of the sticking points in the conversation because one of the reasons that all the provider code was moved out of tree to begin with is the more people you have to coordinate for a release, the more difficult it is. So keeping them as separate release processes, but then installing them together-- that could be good.

MOFI RAHMAN: And I think it's not just the release process, but also testing-- it takes longer because all the vendor code needs to be tested, along with all the code that is in the main trunk.

KASLIN FIELDS: Which costs more money. [LAUGHS]

MOFI RAHMAN: Yeah. So one thing I think people tend to forget is that Kubernetes is an open-source project that runs on donations-- and not just money, but also time from people who work at various companies. But there are also a ton of individual contributors who don't have a big company association. They're just donating their time and keeping this project alive and working.

So making the project more sustainable long-term-- in this next decade of Kubernetes that we're already in-- is going to be crucial to make sure that people can continue to build and add new things. And being able to add things at a faster speed before they need to be merged into the main path is extremely important.

I think Gateway API is already, from idea to now, about five to six years old. But I can imagine that if they had tried to build it as part of the main path, it probably would have taken another three to four years for the API to get to the point it has, because outside the main path they could have their own release cycle and build much quicker. And I think overall that was the correct decision, although at times it can create some friction because things are moving separately from each other.

KASLIN FIELDS: Some interesting conversations in open source about that. Any other highlights that you want to call out from your interview?

MOFI RAHMAN: Yeah, I think a couple of big ones came up in the interview. One is that the ingress2gateway tool is adding a lot more features to help you move any type of Ingress configuration you have over to Gateway. And the maintainers, Lior included, are open to your feedback.

If you take your existing Ingress and ingress2gateway does not do exactly what it needs to do-- and that's a use case a lot of people are having-- they're open to having the discussion. We'll link the GitHub project in our show notes. So if you want to go there and post issues, it's going to make the tool that much better and the migration over to Gateway that much smoother for everybody. So that's one big one.
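[To make the migration concrete: ingress2gateway reads Ingress resources from the cluster or from a file and prints the equivalent Gateway API resources. The exact flags may differ by release, so check the project README, but roughly, an Ingress like the first manifest below (hypothetical names) would translate to an HTTPRoute like the second:]

```yaml
# An existing Ingress with a simple host and path rule...
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
---
# ...corresponds roughly to this HTTPRoute in Gateway API terms.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web
spec:
  hostnames:
  - example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: web
      port: 80
```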

The other one is that if you are building any new application in 2025, today, choosing Gateway as the first choice is probably the way to go. There are very few use cases where Gateway would not cover your networking needs.

And on top of that, if you are looking for ways to handle any traffic other than HTTP, Gateway is probably the way you should go anyway, because people try to make Ingress work using extensions and other external CRDs, and that makes it extremely unportable. What is the-- not portable. What is the opposite of portable?

Because these CRDs are tied to a specific installation of the Ingress software or your cloud provider. But Gateway makes everything very extensible, and Gateway already supports UDP and TCP and gRPC out of the box. So again, in 2025, for any new application you're building, probably look at Gateway. And if that doesn't cover everything you need, that is a good time to bring it up with the Gateway maintainers: this is my use case, I'm seeing this not being supported in Gateway, what can we do?
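[For the non-HTTP protocols Mofi mentions, Gateway API has dedicated route types. A GRPCRoute sketch with hypothetical names is below; TCPRoute and UDPRoute follow the same parentRefs/rules pattern but, at the time of writing, live in the experimental channel as v1alpha2, so verify against the Gateway API version you install.]

```yaml
# Route gRPC calls for one service/method to a backend, using
# gRPC-aware matching instead of plain HTTP path prefixes.
apiVersion: gateway.networking.k8s.io/v1
kind: GRPCRoute
metadata:
  name: greeter-route
spec:
  parentRefs:
  - name: my-gateway
  rules:
  - matches:
    - method:
        service: com.example.Greeter   # hypothetical gRPC service
        method: SayHello
    backendRefs:
    - name: greeter-backend
      port: 9000
```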

So over the last few years, they have been talking to the community and adding more and more features. We are getting to the point where it's almost feature-complete. We'll never get to 100%, because nothing ever is. But for the majority of people, Gateway is probably the right first call.

KASLIN FIELDS: And we talked about how Gateway involves some provider code, but we didn't talk about why that is. In past episodes about the Gateway API, we have talked about it. But-- so there was the Ingress API built into Kubernetes. But realistically, if you're talking about traffic that's coming into your applications and the networking underlying it, then you're going to have to interact with whatever creates that networking for you [LAUGHS] in order to implement things as effectively and robustly as possible, so. [LAUGHS]

MOFI RAHMAN: Yeah, so Gateway API mostly provides an interface that anybody can implement. For example, on Google Cloud, we have multiple implementations of the Gateway API. But there are a number of providers that do that. There's Kong. There's Traefik. They have all implemented their own version of the Gateway API.

And one big one we also talked about in the conversation is that NGINX has announced that they're going to build out their own gateway. NGINX is easily the biggest Ingress provider in the Kubernetes space right now, and a lot of the missing links in how people think about Gateway are features from NGINX.

So if the NGINX folks implement the Gateway API, that lowers the barrier even more and makes it even easier for people to adopt Gateway. Gateway was designed, in a sense, so that every gateway implements an API that anybody with the resources can build out.
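[The "interface" pattern Mofi describes is expressed through the GatewayClass resource: each implementation registers a class pointing at its own controller, and a Gateway selects an implementation by class name. A sketch with hypothetical names:]

```yaml
# Each implementation (cloud provider, Kong, Traefik, NGINX, ...)
# registers a GatewayClass; the controllerName here is a made-up example.
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: example-lb
spec:
  controllerName: example.com/gateway-controller
---
# A Gateway then picks that implementation by class name; the routes
# attached to it stay portable across implementations.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: prod-gateway
spec:
  gatewayClassName: example-lb
  listeners:
  - name: http
    port: 80
    protocol: HTTP
```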

KASLIN FIELDS: You'd call it the same way. Just-- it's using the things that your thing is built on.

MOFI RAHMAN: Yeah.

KASLIN FIELDS: [LAUGHS] And so you have to install things for that. [LAUGHS]

MOFI RAHMAN: Yeah, which has been the design philosophy. And again, let's say on Google Cloud or your cloud provider-- if they're providing a Gateway API implementation that can be bundled in as part of the cluster creation process, you only need to pick a gateway yourself if you're manually installing Kubernetes in your own data center.

Even then, you could probably bundle it in as part of the cluster creation process instead of having to install a new gateway. The percentage of people who need a different gateway within the same cluster is probably not that high. Your company or your team or your organization, or whatever cluster you're creating-- you probably want to be on the same style of Gateway anyway. So--

KASLIN FIELDS: Yeah, having both Ingress and Gateway would just confuse things, but the barrier to entry is really not that high now for Gateway.

MOFI RAHMAN: Yeah, and that's the gist of the conversation. I think the last time we had a Gateway episode on the show was in 2022-- one of the older episodes-- and that was when some of the beta features were being discussed. So it's been a while. Gateway has evolved quite a bit in that time, and in the community there are a lot more options for people to use Gateway now.

And so I'm excited to see where Gateway goes in 2025 and beyond, and how we can have more and more people starting off with it. When I first started in Kubernetes, Ingress was the only way to have networking. And in the middle, for a couple of years, people had to go, oh, I see a lot of information about Ingress, but there's this new Gateway thing-- which one should I choose? In a couple of years, I hope anybody coming to Kubernetes sees the Gateway API as the first choice for building these things out.

KASLIN FIELDS: So if you out there listening are in that migration path right now where you know that Gateway is cool or you've just found out that Gateway is really cool, but you're using Ingress right now and you need to transition over, look forward, in 1.33, to new tools to help you make that migration from Ingress to the Gateway API. Thank you so much, Mofi, and we'll catch you next time.

MOFI RAHMAN: Thank you.

That brings us to the end of another episode. If you enjoyed this show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on social media @KubernetesPod or reach us by email at <KubernetesPodcast@google.com>. You can also check out the website at KubernetesPodcast.com, where you will find transcripts and show notes and links to subscribe. Please consider rating us on your podcast player so we can help more people find and enjoy the show. Thanks for listening, and we'll see you next time.

[MUSIC PLAYING]