#261 October 1, 2025

GKE 10 Years and SIG Networking, with Antonio Ojea

Hosts: Abdel Sghiouar, Kaslin Fields

Today we talk to Antonio Ojea. Antonio is a software engineer at Google and one of the core maintainers of Kubernetes. He is one of the Tech Leads of SIG Networking and Testing and a member of the Steering Committee.

Do you have something cool to share? Some questions? Let us know:

News of the week

ABDEL SGHIOUAR: Hi, and welcome to the Kubernetes Podcast from Google. I'm your host, Abdel Sghiouar.

KASLIN FIELDS: And I'm Kaslin Fields.

[MUSIC PLAYING]

ABDEL SGHIOUAR: Today, we talk to Antonio Ojea. Antonio is a software engineer at Google and one of the core maintainers of Kubernetes. He is one of the Tech Leads of SIG Networking and Testing and a member of the Steering Committee.

KASLIN FIELDS: But first, let's get to the news.

[MUSIC PLAYING]

Google Cloud announced the availability of GKE Autopilot mode inside Standard clusters. Autopilot is a fully managed mode of GKE where Google Cloud manages the entire cluster and charges only for the resources consumed by the pods. Autopilot in Standard clusters gives users the flexibility to switch between the two modes inside the same cluster.

ABDEL SGHIOUAR: The CNCF announced the list of KCD events happening during the first half of 2026. KCD, or Kubernetes Community Days, are community-organized and CNCF-supported events happening around the world, and 2026 will see a lot of new locations, like New Delhi and Kochi in India, Panama, Beijing in China, and more. Check the link in the show notes for the full list.

KASLIN FIELDS: Metal³, pronounced Metal Kubed, joined the CNCF as an incubating project. Metal³ runs on Kubernetes and provides components for bare metal host management. You can provision OS images and manage machines via the Kubernetes API. And that's the news.

[MUSIC PLAYING]

ABDEL SGHIOUAR: Welcome to the show, Antonio.

ANTONIO OJEA: Hi. Thanks for having me.

ABDEL SGHIOUAR: So you've been around for a very long time. We know each other. I'm going to take us, you and me, a little bit back in time. So you're coming from a networking background, right? You did network engineering as part of your career?

ANTONIO OJEA: Yeah. I started as a network engineer. But then in the 2010s, or something like that, there was this open source explosion with virtual networks. And I joined a small startup, and I said, I'm tired of being a network engineer. I want to be the person doing the virtual networks. And that's when I started my career in software development, in networking at that time.

ABDEL SGHIOUAR: And then today in Kubernetes, how do you see the industry kind of shifting from the old school hardware-based networking, all the way to virtual networking, software-defined networking, and the way we do it today in Kubernetes? How has that shift been?

ANTONIO OJEA: It's an interesting topic, right? Networking has always been, how can I say, a siloed technology. People don't like the networking people much because they only call us when there is a problem. And it's evolving right now. Kubernetes has a different approach to networking than traditional networking. Everything is an API. You don't really have virtual routers and all these things anymore. Everything is more abstracted.

And there was, two years ago, a proposal for more traditional networking to be implemented in the core APIs. And after we evaluated it, we decided not to proceed with that approach in core because it was too disruptive. So traditional networking still has room in Kubernetes. We have a working group working on that in SIG Networking, and it is using dynamic resource allocation for that. They are progressing.

They did a demo in the last SIG Networking meeting. I think that was a few weeks ago. And they have good results, and we have a good opportunity to solve some of the workarounds that the industry had to do because Kubernetes did not support much of the traditional networking hardware. And it's exciting, right? There is a good group of people working there, and we are seeing results after they moved to dynamic resource allocation.

ABDEL SGHIOUAR: So from your experience, how do you see-- let's say that there is a company that has an established network and established infrastructure with switches and routers and the, quote unquote, "old school networking." And they want to adopt Kubernetes, right? And they have to somehow integrate this API-driven Kubernetes way of doing networking into their traditional networking stack.

So in my experience, I've only seen it once, with F5, because they have a controller that allows you to sync up load balancing configuration from Kubernetes to the F5 load balancer, which can be a virtual appliance or a physical one. Are controllers the way? Is that how people do it today?

ANTONIO OJEA: I think that, first, there is a fundamental change in mindset: they need to treat the cluster as an autonomous system. Everything that happens inside the cluster is its own routing domain. You need to think that in Kubernetes, the main difference is that the networking-- what's the name? The principles of networking in Kubernetes are simple, right? You have one pod, and every pod has to be able to talk with any other pod.

And this is a strong constraint. And when you start to go to bare metal, then you run into these IP exhaustion problems. So my recommendation is treat the cluster as an autonomous system, and then define the ingress and egress points. Then you are able to compartmentalize and to put a good design in place, so you can start to grow on that.

Then you need to define your integrations. As you say, what do you want to integrate? If it's just virtual IPs, that's the simple thing. You can put a load balancer in front to handle the ingress traffic, right? But then things start to get complicated, and you start to have more application-level things-- let's say, HTTP protocols or API gateways. Then you don't know if you need to go to a service mesh.

You are very familiar with the Gateway API project, and they are solving this problem. They are representing this abstraction so people can handle these more complicated protocols for the ingress traffic to the cluster. Then there is the egress traffic, and that also happens on bare metal. And there are two kinds of problems that you observe there.

One is that people want to apply network policies per pod or per instance. And we in SIG Networking are working on standardizing network policy. For the admin network policy, there is another working group. It's a subproject. We are going to try to get it to beta at this KubeCon. And this is a heavily demanded feature for people on premises, because they have all these firewall rules, and they need to apply these firewall rules in a more traditional way. And this API is able to cover that.
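For readers following along, here is a minimal sketch of the namespaced NetworkPolicy API that this admin-level work generalizes, written with the Go types from k8s.io/api. The namespace, selector, and CIDR are illustrative, not from the episode.

```go
package main

import (
	"fmt"

	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Egress-only policy for every pod in an assumed "payments" namespace:
	// traffic is only allowed toward a hypothetical on-premises range, which
	// is roughly the shape of the firewall rules Antonio mentions.
	policy := netv1.NetworkPolicy{
		TypeMeta:   metav1.TypeMeta{APIVersion: "networking.k8s.io/v1", Kind: "NetworkPolicy"},
		ObjectMeta: metav1.ObjectMeta{Name: "restrict-egress", Namespace: "payments"},
		Spec: netv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{}, // empty selector = all pods in the namespace
			PolicyTypes: []netv1.PolicyType{netv1.PolicyTypeEgress},
			Egress: []netv1.NetworkPolicyEgressRule{{
				To: []netv1.NetworkPolicyPeer{{
					IPBlock: &netv1.IPBlock{CIDR: "10.20.0.0/16"}, // hypothetical datacenter range
				}},
			}},
		},
	}
	out, _ := yaml.Marshal(policy)
	fmt.Println(string(out))
}
```

AdminNetworkPolicy, the API Antonio refers to, is a cluster-scoped CRD from the network-policy-api subproject (group policy.networking.k8s.io) that layers priority-ordered, cluster-wide rules on top of this per-namespace model.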

And the other problem is the egress traffic. Usually, you have a chokepoint. You have the F5 for ingress, but you may have some firewall vendor appliance, and then you want to send this traffic through this appliance. This is an area where we don't have standardization, but the main CNI plugins, network plugins like Calico or Cilium, implement this functionality with their custom CRDs.

We in SIG Networking try to standardize once we see that it's possible to standardize, and if not, we encourage these projects to cover these gaps with CRDs and controllers. The functionality is there, and users can use it.

ABDEL SGHIOUAR: So on the topic of egress, I actually have a very specific question. I think this is something that has been floating around for a while, and I have never seen a standard solution. So if you have a cluster that egresses toward remote endpoints, you either have to let the entire cluster talk out, or you do a NAT gateway, right?

Typically on cloud, you would do a NAT gateway to route traffic through a gateway. The problem is that on the receiving end, the IP address will be the IP of the gateway itself. So now, what if you need the pods to have identification-- specific IPs? I want this group of pods in this particular namespace to always egress with a known IP address. That, to my knowledge, has never been solved by Kubernetes.

ANTONIO OJEA: Yeah. That falls into this area where we don't have enough-- I wouldn't say time, but enough push. Because the problem is that it's very implementation-specific. Now, this is typical in the cloud. You use a NAT gateway. But typically on premises, you use-- I don't know. You will have your switches, routers, all your internal stuff.

And then the other thing is, we talked about the Kubernetes networking principle, that every pod should be able to talk with any other pod without NAT. This means that you have a fully distributed fabric inside the cluster. And then when you try to put state on this fabric to say, OK, from this namespace-- you have 1,000 nodes-- it can mean every single node needs to share one IP.

And this becomes a very challenging distributed system problem, to implement policy routing per node, per-- so it's complicated. It's also in the realm of the CNI, and in SIG Networking we try not to get very deep into that. And that's one of the main reasons why, at SIG Networking, we don't have a standard solution. There were several approaches to create an egress router CRD or API, but let's say it's not in the top priorities these days, unfortunately.

ABDEL SGHIOUAR: Got it. Yeah. Because I've seen solutions floating around on the internet. The CNI obviously is one way to do it. To me, this particular topic is funny because it fundamentally shows how a lot of security on the internet still relies on IP addresses.

ANTONIO OJEA: Yes.

ABDEL SGHIOUAR: Because this particular problem comes up when people say, I have an external endpoint that I'm talking to, and that vendor asked me to give them an IP address to whitelist, right?

ANTONIO OJEA: There is an interesting one-- I was talking just last week about this. Now, with AI/ML workloads, the requirements are changing a lot. Kubernetes is built with a lot of-- everything is scattered and stateless, and you should scale out and not keep state. And now we move to agents and MCP servers. And then everything is stateful, and everything has to live forever and everything.

And one of the requirements I was discussing last week was about this. OK, I want to have an IP, the pod IP. In the end, I don't care. This pod has this IP, and it's its identity. And if I want to move the application, like live pod migration, I want the IP to move with it. Many people are not familiar with dynamic resource allocation, which went GA.

The main driver was GPUs, but the thing is that it expands the possibilities of Kubernetes. Dynamic resource allocation has the primitives to create more complex things. And one of the things I'm working on with a colleague is in this GKE Labs-- I don't know if it's still open, but we have an organization on GitHub where we put this thing-- dynamic resource allocation for virtual IPs.

So with this feature, when you create a pod, you are going to request a resource, and this resource is going to be an IP. So dynamically, the scheduler and some controller are going to find, oh, this pod is requesting an IP. So you are able to get an IP connected to this pod via this resource. It's still in progress, but now that dynamic resource allocation is GA, we are going to see more of these things that we couldn't do before, or had to do with custom annotations and this stuff, done with Kubernetes-native primitives.
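As a rough illustration of that pattern (not the actual GKE Labs project API), this is how a pod references a DRA claim using the stock core/v1 types from a recent k8s.io/api; the claim name and the idea of an IP-allocating driver are assumptions for the sketch.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// The pod asks for a resource claim named "vip"; a hypothetical DRA driver
	// would satisfy the referenced ResourceClaim by attaching a specific IP.
	claimName := "static-ip-claim" // assumed ResourceClaim created ahead of time
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "stateful-agent"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "registry.example.com/agent:latest", // placeholder image
				Resources: corev1.ResourceRequirements{
					Claims: []corev1.ResourceClaim{{Name: "vip"}},
				},
			}},
			ResourceClaims: []corev1.PodResourceClaim{{
				Name:              "vip",
				ResourceClaimName: &claimName,
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}
```

The actual allocation logic would live in a DRA driver plus controller; only the pod-side plumbing shown here is standard Kubernetes.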

ABDEL SGHIOUAR: Yeah. That's interesting because I recently came across a discussion with a customer about one particular other problem they are facing with AI. They have a NAT gateway on the way out, so all the traffic looks like it's coming from a single IP. But when they have multiple pods scaling up, the place the models are downloaded from throttles the traffic, right? Because it sees multiple requests coming from the same IP, it just slows down the traffic. So the model downloads take even longer because of this single-IP-out problem.

ANTONIO OJEA: Yeah, we should start being able to model this complexity, right? In this case, maybe you model this as, I want to do a NAT pool. Now, we don't have a way of defining a NAT pool, right? Or maybe some people do a NAT pool in some custom way. Now, with dynamic resource allocation, we can model the NAT pool as a resource, as an object, and we can have a deployment request IPs from this NAT pool.

ABDEL SGHIOUAR: Yeah, so you can have multiple IP addresses as the way out, right?

ANTONIO OJEA: Yeah. Then we need to start implementing these things in the projects. But right now, it's native, and it's possible to do that. And this opens up a lot of horizons for networking, where we were very stuck on the limitations of the existing APIs. And this is going to uncover a lot of new scenarios like the one that you are describing.

ABDEL SGHIOUAR: Nice. I'm going to shift the conversation a little bit. Where are we on the Gateway API project? How involved are you in that, and where is that thing heading?

ANTONIO OJEA: OK. I'm traditionally more involved in the core Kubernetes networking. Gateway API is a subproject, and it has a very large scope, to the point that it has independent maintainers, independent leads, and in SIG Networking, we completely delegate that to the maintainers, right? We always have to check that it's aligned with the core project. And actually, this morning, I was working on that. We have this funny project, KIND, that is Kubernetes in Docker, that was created by Benjamin Elder, and I joined it.

And we have this cloud provider KIND that originally was created to handle load balancers, because locally you cannot use a cloud load balancer. I'm implementing Gateway API in cloud provider KIND, and I'm almost finished with the alpha. So what is the reason? The main problem is that Gateway API covers a lot of space, and a lot of complicated space, right? You need to handle all these L7 protocols mainly, and it also requires dealing with TLS. It has a lot of references between objects.

And the majority of the end users are still stuck on Ingress. And one of the things that we want to solve is, let's get more end user feedback, right? What is blocking people from moving from Ingress to Gateway? And definitely in SIG Networking-- I don't know if we officially declared that-- Ingress is an API that we consider frozen, and we are only developing new features in Gateway API.
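For context on the Ingress-to-Gateway move, here is a minimal sketch of an HTTPRoute built with the Gateway API Go types; the Gateway name "external-gw", the hostname, and the backend Service are placeholders, not anything discussed in the episode.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	gatewayv1 "sigs.k8s.io/gateway-api/apis/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Route one hostname through an assumed Gateway to an assumed Service "web".
	port := gatewayv1.PortNumber(8080)
	route := gatewayv1.HTTPRoute{
		TypeMeta:   metav1.TypeMeta{APIVersion: "gateway.networking.k8s.io/v1", Kind: "HTTPRoute"},
		ObjectMeta: metav1.ObjectMeta{Name: "web", Namespace: "default"},
		Spec: gatewayv1.HTTPRouteSpec{
			CommonRouteSpec: gatewayv1.CommonRouteSpec{
				ParentRefs: []gatewayv1.ParentReference{{Name: "external-gw"}}, // assumed Gateway
			},
			Hostnames: []gatewayv1.Hostname{"app.example.com"},
			Rules: []gatewayv1.HTTPRouteRule{{
				BackendRefs: []gatewayv1.HTTPBackendRef{{
					BackendRef: gatewayv1.BackendRef{
						BackendObjectReference: gatewayv1.BackendObjectReference{
							Name: "web", // assumed backend Service
							Port: &port,
						},
					},
				}},
			}},
		},
	}
	out, _ := yaml.Marshal(route)
	fmt.Println(string(out))
}
```

The ParentRefs entry is what attaches the route to a Gateway owned by the cluster operator, which is the split of responsibilities that Ingress could not express.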

ABDEL SGHIOUAR: Gateway API, yeah.

ANTONIO OJEA: Yeah. The most exciting one was the one that went GA last week. I was doing the API review last week. It's the Inference Gateway, right? That's an extension for handling these models. It implements better routing and improves the efficiency. There is a lot of work in that area, and yeah, I will be moving to work more on this area for the next year.

ABDEL SGHIOUAR: Nice. Yeah, yeah. I did a talk about it. There is a lot of interest in the Kubernetes community and from people coming to KubeCon. The session that we did with another speaker was pretty packed. There was a lot of interest. I'm going to talk really quickly about something that I know you were one of the main people working on. So in release 1.31 of Kubernetes, the multi Service CIDR feature went live, right? And if I recall correctly, you are one of the main people who worked on that, right?

ANTONIO OJEA: Yeah, and this was an interesting feature. So basically, how it started is because in 2020-- I started working on Kubernetes in 2018. I was working on another thing. And when I joined, I asked, where can I help? And they told me, well, IPv6 is awful. And I said, I know something about IPv6.

And I started to work on that. And I GA'd IPv6 in 2020. And one of the things that was shocking is, oh, I cannot use more than a /112 for the Service CIDR in IPv6. I said, why? Why is that?

So it turns out that the way services are implemented, you have a bitmap in the API server. So if you increase the size of the CIDR, you increase the size of the bitmap. This bitmap is stored in etcd. And if it grows too much, you don't even have space in etcd, and everything starts to go slower or whatever. So I said, oh, we need to change that. And I started to explore this, and I said, OK, we need to change all this bitmap allocation.

And I started talking with Tim Hockin, the API Machinery leads, and other people in SIG Networking, and said, OK, if we do this, we also have this problem that once you set a Service CIDR, it's forever. Why don't we make it dynamic? And this started in 2020. And I think, as you say, at that time it maybe was 1.21 or something like that. And we GA'd in 1.33 and went beta in 1.31 or-- yeah, something like that. So it was a very complex change that touches the heart of Kubernetes.

It was like doing brain surgery, but we made it. And we have very positive feedback, because clusters are not ephemeral anymore, and people run out of space. They need to increase the size of the Service CIDR. We also have people coming to the Slack channel to thank us for that. So it was really complicated, and we are happy to have it in prod now.
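For readers who want to try it, a minimal sketch assuming the networking.k8s.io/v1 ServiceCIDR API that the feature graduated to in Kubernetes 1.33 (so it needs 1.33+ client libraries); the extra range is illustrative.

```go
package main

import (
	"fmt"

	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// A second Service CIDR extends the existing Service IP space without
	// recreating the cluster; the allocator can hand out ClusterIPs from it.
	extra := netv1.ServiceCIDR{
		TypeMeta:   metav1.TypeMeta{APIVersion: "networking.k8s.io/v1", Kind: "ServiceCIDR"},
		ObjectMeta: metav1.ObjectMeta{Name: "extra-service-range"},
		Spec: netv1.ServiceCIDRSpec{
			CIDRs: []string{"10.100.0.0/20"}, // illustrative additional range
		},
	}
	out, _ := yaml.Marshal(extra)
	fmt.Println(string(out))
}
```

After applying an object like this, new Services can get ClusterIPs from the additional range without touching the original --service-cluster-ip-range the cluster was created with.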

ABDEL SGHIOUAR: Yeah, this particular limitation, before the release of the multi CIDR feature, goes to the core of, I think, one of the fundamental problems in tech, which is capacity planning. Because before, the conversation would always be, you have to size the cluster to the maximum amount that you-- but no one knows what that maximum amount is going to be, right?

ANTONIO OJEA: Yeah, and that's the funny thing, because then you see the different kinds of people, right? There is the optimistic person that says, oh, I'm going to put this /12. And then, oh, I need all this space for other things. And then you have the pessimistic one: oh, I'm going to use, I don't know, 10 services, so I put a /28 or a /27. OK, I ran out of space.

So it is very funny. And this happens. We have feedback from all these cases, and it's a really nice social experiment to see how we have pessimistic and optimistic planning in the world.

ABDEL SGHIOUAR: So I know you spoke about KIND, and I do have one quick question. How does Kubernetes core networking handle the networking differences between running Kubernetes inside KIND, which is running in a container on your laptop, and running in the cloud or running on-prem? How do you reconcile all of these different environments from Kubernetes' point of view?

ANTONIO OJEA: That's one of the nice things of the Kubernetes networking model, right? It requires you to get connectivity across the cluster. So you can do-- basically, it boils down to two common solutions. Either you use an overlay to create this flat network, or you use routing that allows you to connect each node with each other node. The first one is the simplest one, but it comes with a lot of complexity and performance problems.

The second one is the more performant, but it means that you need to do better subnet planning and implement the routing and all this stuff. In KIND, because we use the Docker network, we have this flat model, right-- this non-overlay, this direct routing. So Benjamin and I created kindnet, which is the CNI that works for KIND, and which we are actually proposing to donate to Kubernetes.

And kindnet is very basic-- it routes the pod subnets on each node, and then you just install a route from each container to the other containers with the pod subnet. And it just simply works, because the Docker network is flat, so it's just a simple routing problem. And you just need a very small CNI that is able to route correctly between pods and between nodes.
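A minimal sketch of that idea, not kindnet's actual code: one route per remote node, pointing that node's pod CIDR at its node IP. The addresses below are made up, and a real CNI would discover them from the Node objects instead of hardcoding them.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// On a flat (non-overlay) network, pod-to-pod routing is just
	// "podCIDR of node X is reachable via node X's IP".
	routes := map[string]string{ // podCIDR -> node IP (assumed values)
		"10.244.1.0/24": "172.18.0.3",
		"10.244.2.0/24": "172.18.0.4",
	}
	for podCIDR, nodeIP := range routes {
		// `ip route replace` is idempotent: it adds the route or updates it in place.
		cmd := exec.Command("ip", "route", "replace", podCIDR, "via", nodeIP)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("failed to program route %s via %s: %v (%s)\n", podCIDR, nodeIP, err, out)
		}
	}
}
```

Run on each node with the right table filled in, this is essentially the whole data path: no tunnels, just the kernel routing table.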

ABDEL SGHIOUAR: Got it, got it. And I'm going to ask you something about performance, because your work on the SIG, or SIG Networking in general, goes beyond just IP planning. Performance is probably one of the main things. And my question is, I never understood why Kubernetes doesn't have-- and humor me here, because I know that this is probably complex-- not complex for you, but-- why doesn't Kubernetes have quality of service for networking?

ANTONIO OJEA: OK, this is a recurrent topic, because the main thing is, when you talk about quality of service, you get different requests. There are some people that just want to do rate limiting, right? And then the other problem with networking is that, contrary to other technologies, it is statistically multiplexed, right? So typically you have one outbound connection-- you have a VM, and the VM has an interface, OK? That's the interface that everybody needs to share. That's your shared resource, OK?

And the way that this is implemented is you implement queuing, right? So all the pods are going to end up in the queue of this interface. And then when you say quality of service, usually you mean, OK, I want this pod to get more priority than this other one, so its packets go out first. But on the other hand, oh, if this pod that I want to have higher priority is not sending anything, I want to reuse this for the other pod. Otherwise, I am wasting resources.

And the main problem is, how do you come to-- the only model that I see that works is with priority, right? You can assign different priorities to pods, and then the priority indicates some preference or something like that. The thing is that then we end up in this Kubernetes distributed-model problem: every pod and any other pod are able to connect to each other, and we don't know beforehand what the traffic patterns are going to be.

So how do you model this for the user? How do you say, oh, this web server needs to use 80% of the traffic at this time, or if it's on this node? So I always get requests. I'm trying to find somebody to work with us to model this better, because the main gap that we have is the UX. How do we design this system of quality of service so the users can program these things? Because what we have today is an annotation that sets bandwidth, but bandwidth is rate limiting.

And with rate limiting per pod, you run into this issue of wasting resources, right? Because if nobody is using the uplink, then you are wasting the resources. And Kubernetes is very well designed to optimize and be efficient, and that's problematic.
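The annotation he refers to is the pair kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth, honored by CNIs that chain the bandwidth plugin. A minimal sketch of a pod using them; the pod name and image are placeholders.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Per-pod traffic shaping via annotations: a hard cap in each direction.
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "rate-limited",
			Annotations: map[string]string{
				"kubernetes.io/ingress-bandwidth": "100M",
				"kubernetes.io/egress-bandwidth":  "50M",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "registry.example.com/app:latest"}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}
```

This is a hard cap per pod, which is why Antonio calls it rate limiting rather than quality of service: the pod keeps its limit even when the uplink is otherwise idle.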

ABDEL SGHIOUAR: So I guess I'm going to play devil's advocate here. I think the question goes beyond just rate limiting or quality of service between pods, because on the node, you also have other things that are fighting for the bandwidth. You have the image pull traffic. You have your logs and metrics traffic. And now with AI, these models are huge. You don't want to end up in a situation where a pod is pulling a model, so it's using all the bandwidth available on the VM and preventing other pods from getting any. Do you see what I mean?

ANTONIO OJEA: Yeah. That's the other thing, right? It's like with CPU. Remember that when you set up the kubelet, you also allocate some CPU for the kubelet, right?

ABDEL SGHIOUAR: Correct.

ANTONIO OJEA: This is the same. OK, in this model where you only have one NIC for the control plane, for images, for everything, how do you know how much bandwidth you can allocate? Because, as you say, before, you had an image that was 500 megs, and you said it's big. But now it's, oh, I'm going to download this model that is-- we are going to end up sending DVDs back to the server, right, because the image sizes now are all on the order of gigabytes. And that touches another important problem, which is the storage. The storage usually goes over the network, right? If you connect a bucket or--

ABDEL SGHIOUAR: A disk?

ANTONIO OJEA: Or a disk.

ABDEL SGHIOUAR: NFS or something, yeah.

ANTONIO OJEA: Whatever, and that is going to consume network, and the storage is very sensitive to latency. And then if you put something with more priority and it starts, I don't know, doing whatever thing, it's going to create this thundering herd effect, right? You want to use the network, and then you're going to use the storage. And because the storage is slow, you start to reconnect over the network, and then you call support.

ABDEL SGHIOUAR: Yes.

[LAUGHTER]

So actually, for the storage, I think, if I can, I think I have a better solution. Because in old school networking-- again, back to the topic we started with-- the way you solved this is you would have a dedicated interface for storage, right? Your server would have one interface for Ethernet and then one Fibre Channel connection going to the NAS. So you isolate the traffic if you want to have low latency, at least for the storage traffic, right?

ANTONIO OJEA: Yeah, that's the nice thing of Kubernetes. We don't reinvent the wheel. The question is, then you start to add dollars to the bill, right?

ABDEL SGHIOUAR: Yeah, of course, yeah.

ANTONIO OJEA: They say, OK, I don't want to pay so much for this storage. I just want this to work perfectly at the current cost. It's a tradeoff between cost and performance. And as I said, this is something that I'm especially interested in, and we are working hard on GKE on optimizing this price/performance.

ABDEL SGHIOUAR: I think this is basically where things can start getting into a bit of a gray area, because if you are on a cloud provider, there is probably a solution. We have gVNIC in Google, which is this massive-bandwidth network interface, right, which can do hundreds of gigabits. But not everybody is using a cloud provider. People are still stuck on-prem with their own Kubernetes or whatever. So that's why I was asking this question.

And so leading toward the AI stuff, I guess you talked about it a little bit, how AI is changing the way you are working in SIG Networking. But where do you see the future? Where do you see AI taking you in the networking space? Because it's touching everything, including the basic infrastructure components.

ANTONIO OJEA: Yeah, as I mentioned before, if we oversimplify networking into the high level, that is Gateway API, and it is clear that this requires more efficient algorithms to expose and to allocate resources for inference. Then we have the low-level networking. That's where we started-- the more traditional network. That part, we already solved. I've been working during the last two years on that, and it was with dynamic resource allocation.

The original proposal was to use a multi-network API, and as I said before, it was very complex. It cannot be standardized and would create fragmentation, because everybody would need to depend on their own CNI plugin. And with DRA, I solved this. I wrote a paper. It's going to be published in one of the IEEE conferences, about the Kubernetes Network Driver model and how to use it for this problem.

And we created a project at Google that is called DraNet that is solving this problem, right? It's for AI/ML low-level networking. You usually need to use an out-of-band network that is RDMA. You can have an RDMA interface that uses Ethernet, or the InfiniBand interface. And what we originally thought was a multi-network problem was actually a multi-interface problem, right? Same as with the storage. You don't need to model the storage network.

What you want is for the pod to attach the NIC that goes to the storage network. And this is the same problem: the pod needs to attach this NIC that is RDMA, in addition to the pod network. And we solved that. We have a project, and it's working fine. It is working really well, to the point that I expect to release it in GA next year, and we are going to do the GA in one month or something. Maybe for KubeCon, who knows.
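As a rough sketch of what requesting such a NIC through DRA can look like, assuming the resource.k8s.io/v1beta1 API shape and an illustrative DeviceClass name; this is not DraNet's actual published API.

```go
package main

import (
	"fmt"

	resourcev1beta1 "k8s.io/api/resource/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// A ResourceClaim asking for one device from an assumed DeviceClass
	// "rdma.example.com"; a DRA network driver would satisfy it by attaching
	// the matching RDMA NIC to the pod that consumes this claim.
	claim := resourcev1beta1.ResourceClaim{
		TypeMeta:   metav1.TypeMeta{APIVersion: "resource.k8s.io/v1beta1", Kind: "ResourceClaim"},
		ObjectMeta: metav1.ObjectMeta{Name: "rdma-nic", Namespace: "training"},
		Spec: resourcev1beta1.ResourceClaimSpec{
			Devices: resourcev1beta1.DeviceClaim{
				Requests: []resourcev1beta1.DeviceRequest{{
					Name:            "nic",
					DeviceClassName: "rdma.example.com", // hypothetical class registered by the driver
				}},
			},
		},
	}
	out, _ := yaml.Marshal(claim)
	fmt.Println(string(out))
}
```

A pod would then reference this claim from spec.resourceClaims, the same mechanism shown earlier for the virtual IP sketch.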

ABDEL SGHIOUAR: OK. Yeah, because this is actually a particularly interesting problem, because a lot of these big models require multihost inference. For example, if you have a model that you run on top of multiple nodes, then you need high bandwidth between the nodes for the GPU-to-GPU communication, right?

ANTONIO OJEA: That's the key. That's exactly the problem, right? When we talk about network-- there is a good anecdote that I have. The first time that I got in touch with this was with an HPC problem. They had a networking problem. They were talking with SIG Scheduling or something like that, and they said, oh, call Antonio.

And when I went there and said, OK, what is the network here-- I was thinking, oh, this is slow, and they were saying it's slow. It was 10 microseconds or something. I said, are you saying "micro" or "milli"? Micro. Micro is low.

I said, OK, this is a totally different problem than what we would use the DNC network to deal with. And the more you start digging into that, you start to see, OK, how do you solve a problem when you are CPU bound, and then offloading, right? And that's it. That's the key thing. What the GPUs and these NICs do is they offload all the processing of the packets, or all the processing of the information, to these protocols. They don't even use TCP because it's slow. They use RDMA.

And then you just need to be able, in Kubernetes, to offer the user the opportunity to express, oh, I want to use this application with this hardware-- that is, the GPU and the NIC. And also to say, we want to get the optimal performance. That means you need to match the PCI bus for the GPU and the NIC, because otherwise you pay an important penalty if you don't get alignment with the intra-node architecture.

ABDEL SGHIOUAR: Yeah. It's actually interesting. You touched on a topic that I think we should probably cover on the podcast. We just need to find someone. A lot of people don't realize how CPU bound you are when you are on the cloud, right? Because your performance for storage, your performance for networking, and your performance for pretty much everything else on the node is CPU dependent, right?

ANTONIO OJEA: Yeah, that's the main problem, right? In the end, in networking, people are used to getting the tables with the bytes per second and other things. But you need to think that you don't have a router. What you are doing is processing the packets on a CPU or offloading to a SmartNIC. And that's the thing that is going to limit your performance, right? Whatever has to process the packets is your bottleneck.

And usually in networking, you can offload. We have eBPF. We have Netfilter. There are a lot of technologies to offload networking. But then you also want to have this nice feature of, oh, I want to check this header to decide whether to send this to this backend or to the other backend, right? So you have this tradeoff: you can only offload the things that the hardware is able to understand. Otherwise, you need to process in software.

ABDEL SGHIOUAR: Yeah. And I'm just going to throw one more question at you, and then we can close out the conversation. I know that this mostly touches on security, but I'm always bundling security and networking in the same space. What's your take on this craziness of insecure MCP servers?

Everybody's building MCP servers today, but no one seems to talk about authentication and authorization, which drives me crazy. You have an autonomous agent, and you give it access to a bunch of HTTP servers with no limitation on what it can do. Where do you think we're heading there? I know that there is no standard for sure.

ANTONIO OJEA: Yeah. I had this conversation the other day, because this is a hot topic, right? And this is where we start to rethink the lines of Kubernetes, where traditionally you implement a service mesh kind of service, right, to handle the network authentication. So the question is, how is the future going to be, right? Are the applications, all these frameworks, going to embed authentication, or are we going to delegate it to the network?

I don't have an answer. I see the industry moving in both ways in parallel. And I expect that some smart person will come up with the idea and say, oh, this has to be done this way, and people will follow. But right now, as you say, it's the Wild West.

[LAUGHTER]

Everybody runs things. We are going to get hacked or something like that. And so most of the-- what is the name? CIO? What is-- the CISO, the security guys, are scared about this.

ABDEL SGHIOUAR: Yeah, CISO, yeah?

ANTONIO OJEA: All the CISOs are scared. And yeah, we are working on that. It's an area with a lot of development. And unfortunately, I don't know what's there.

ABDEL SGHIOUAR: Yeah, it's an industry-wide problem, right? And it's very funny to me. I have been in the industry for 15 years. You have been there longer than me. It's funny to me today how a lot of these problems are things that we have been talking about for, like, 25 years.

ANTONIO OJEA: That's-- I tell you, I was having this conversation. I said, OK, we are moving back to stateful. And that is shocking, because with Kubernetes, everything was cattle. You don't have pets. And now it's, oh, you have all these pets, and you need to take care of them.

ABDEL SGHIOUAR: Yeah.

ANTONIO OJEA: It's funny.

ABDEL SGHIOUAR: I guess the much bigger observation is that, fundamentally, these are problems we solved a while ago. Like authorization and authentication, that's not new. We have been doing this for a very long time. It's a different problem in a different place, but fundamentally, the solution is the same.

ANTONIO OJEA: Yeah, that's the best advice I give to the junior people that come to this: look, this is not new. It just changes the scale and changes some of the environment. But the problem is something that we have been working on for a long time. So let's try to use the best practices and experience and tackle this problem.

ABDEL SGHIOUAR: Awesome. I couldn't have ended better than this. Thank you so much, Antonio, for your time.

ANTONIO OJEA: Thank you, Abdel, and hope to see you soon in KubeCon or--

ABDEL SGHIOUAR: I will see you at one of the KubeCons for sure.

ANTONIO OJEA: OK.

ABDEL SGHIOUAR: All right, thank you.

ANTONIO OJEA: Bye.

[MUSIC PLAYING]

ABDEL SGHIOUAR: That brings us to the end of another episode. If you enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on social media @KubernetesPod or reach us via email at <kubernetespodcast@google.com>. You can also check our website at kubernetespodcast.com, where you will find transcripts and show notes and links to subscribe. Please consider rating us in your podcast player so we can help more people find and enjoy the show. Thanks for listening, and we'll see you next time.