#246 January 29, 2025
ABDEL SGHIOUAR: Hi, and welcome to the "Kubernetes Podcast from Google." I'm your host, Abdel Sghiouar.
KASLIN FIELDS: And I'm Kaslin Fields.
[MUSIC PLAYING]
ABDEL SGHIOUAR: In this episode, we speak to William Morgan. William is the CEO of Buoyant, the company behind Linkerd. We talked about Linkerd's architecture and use cases and how a company can balance business and open source.
KASLIN FIELDS: But first, let's get to the news.
[MUSIC PLAYING]
Red Hat announced the Red Hat Connectivity Link, an implementation of the Gateway API and Project Kuadrant that automates connectivity between applications deployed on different platforms. Check out the link in the show notes for more details.
ABDEL SGHIOUAR: The schedule for KubeCon and CloudNativeCon 2025 in London is live. The event is taking place April 1 to 4 at ExCeL London.
KASLIN FIELDS: The calls for proposals for the next major KubeCons are still open. For KubeCon China and Japan, the CFPs will close February 2, and they are accepting proposals in both Japanese and English for KubeCon Japan and in both Chinese and English for KubeCon China. For KubeCon India, the CFP will close on March 23, and submissions are expected in English.
ABDEL SGHIOUAR: Kubezonnet is the name of a new open-source project from Polar Signals. The tool uses eBPF to measure and report cross-zone traffic in Kubernetes clusters hosted in the cloud. Network traffic between availability zones in the cloud is typically not free, and the tool provides users with the visibility to monitor cross-zone traffic and optimize their setup to be more cost-effective.
KASLIN FIELDS: And that's the news.
[MUSIC PLAYING]
ABDEL SGHIOUAR: Today, we are talking to William Morgan. William is the CEO of Buoyant, the company behind Linkerd. You worked at Twitter before as a software engineer and also an engineering manager, and you have long experience in the field. I'm super excited to talk to you today. So welcome to the show, William.
WILLIAM MORGAN: Thank you, Abdel. Great to be here. And thank you for having me.
ABDEL SGHIOUAR: I'm glad we managed to finally get to do this.
WILLIAM MORGAN: Yeah.
ABDEL SGHIOUAR: Last time Linkerd was on the show was 2020. I actually checked it out. So I guess in five years, there have been a lot of new things, I hope. But as usual, we try to start our episodes with the assumption that no one knows what we are going to be talking about. So my first question to you is, what is Linkerd? Can you give us a basic introduction?
WILLIAM MORGAN: It's an open-source project, and it's a service mesh. And in the early days of Linkerd, I often had to spend a lot of time explaining why service mesh was like a thing that you had to care about. Nowadays, that term at least has kind of become a little more common. But the idea is that you add Linkerd to a Kubernetes cluster, or a set of clusters, and it kind of provides this application layer of networking.
Where normally we think of networking as, how do I get a TCP packet from point A to point B, a service mesh like Linkerd is going to be focused more on, how do I get HTTP requests to go safely from one pod to another? Can I do retries? Can I do timeouts? Can I do request-level load balancing? Can I send it across clusters? Can I secure both sides of the connection with mutual TLS, so I can encrypt the traffic and authenticate both ends? So kind of this higher-order level of features.
So that's what Linkerd does. And the goal for us is you should be able to add Linkerd to an existing Kubernetes application and not break the application and not have to change any of your configuration. You should be able to just start getting new and powerful capabilities.
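(For readers following along at home: a minimal sketch of that "add Linkerd without touching the app" flow, assuming the linkerd CLI is installed. The demo namespace is a placeholder, and flags can vary by Linkerd version.)

```sh
# Install the CRDs, then the control plane, then verify the install.
linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -
linkerd check

# Opt an existing namespace into the mesh; the proxy injector adds the
# sidecar to any pods created after this annotation is in place.
kubectl annotate namespace demo linkerd.io/inject=enabled
kubectl rollout restart deployment -n demo   # recreate pods so they get injected
```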
ABDEL SGHIOUAR: Awesome. So yeah, the way I always describe service mesh is that it's adding intelligence to the network layer, essentially, without having to change the application, as you said. So you don't really need to modify your application to be able to do mTLS, end-to-end encryption, or authentication, or stuff like that.
So as somebody who has been dabbling mostly in the Istio space for a while, I just read the documentation to prepare for the episode. And so you correct me if I'm wrong. So Linkerd follows a very standard kind of architecture-- control plane, data plane kind of architecture-- with sidecar proxies, right?
WILLIAM MORGAN: Yes.
ABDEL SGHIOUAR: So what makes it different from other service meshes out there?
WILLIAM MORGAN: Yeah, I think the biggest difference kind of starts with a philosophical difference, which is our angle for Linkerd has always been around maximizing simplicity, and especially operational simplicity. So our model of our user is, OK, here's some poor soul who's been tasked with adopting Kubernetes and learning everything about Kubernetes and the kind of rich set of knowledge you have to imbibe in order to be able to operate Kubernetes successfully.
And now they're adding Linkerd into that mix. Can we minimize what they really have to know? Can we stick as closely as possible to the way that Kubernetes already works? Can we make it so that Linkerd gives you features out of the box by default without having to add config as much as possible? And then when you do have to add config, can we make sure that that config scales sublinearly with the value you're getting back from it?
So that ethos has kind of informed every design choice that we've made in Linkerd. And I think Linkerd's reputation showcases that philosophy. Like, when people tell us that they love it, they're like, OK, it's because it's so simple. It's because I didn't have to do a bunch of complicated things to make it work.
And maybe more importantly, it's because at 3 AM, when I wake up, and the alarm bells are ringing because I'm an SRE, and I'm on call or whatever, I can understand what Linkerd is doing. I can build a mental model of how it works. And then I can diagnose and debug it really quickly.
And we can talk about how that philosophy kind of turned into specific implementation choices and things like that. And when we talk about the proxy and stuff like that, we'll get into some of those details. But really, that philosophical difference, I think, is the most salient when comparing to other projects.
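(A rough sketch of the kind of 3 AM triage William describes, assuming Linkerd's optional viz extension is installed; workload and namespace names are placeholders.)

```sh
linkerd check --proxy               # health of the control plane and the data-plane proxies
linkerd viz stat deploy -n demo     # per-deployment success rate, request rate, latency
linkerd viz edges deploy -n demo    # which workloads talk to which, and whether mTLS is in place
linkerd viz tap deploy/web -n demo  # live stream of requests flowing through a proxy
```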
ABDEL SGHIOUAR: Yeah. So we're going to get into the details. I just wanted to mention very quickly-- when you said, if you are a poor engineer that is tasked with learning Kubernetes, I think that we-- like, a lot of people assume that everybody knows Kubernetes very well. And you just have to jump into Reddit to see the horrifying fact of how much people are still struggling, right?
It's quite interesting for me. I was talking to Kaslin the other day. And I saw a post on Reddit recently about somebody saying, we have apps deployed on Kubernetes, but we have to restart the pods every week because they treat them as VMs, essentially. They're like pods, but they-- like, it's--
WILLIAM MORGAN: Yeah. And it's funny because Kubernetes has been around for 10 years now. And so we finally-- you know, there was always a joke of, oh, this job posting is like asking for 10 years of Kubernetes experience. Well, now we actually--
ABDEL SGHIOUAR: Have 10 years.
WILLIAM MORGAN: There are people, right? And we've been doing-- I've been involved with Kubernetes, or at least Kubernetes adjacent, since 2016, so almost--
ABDEL SGHIOUAR: 10 years, yeah.
WILLIAM MORGAN: --10 years. But what strikes me is every time I go to KubeCon-- and I've gone to almost every North American and European KubeCon since the early days-- there's new people in that audience all the time.
ABDEL SGHIOUAR: Yeah.
WILLIAM MORGAN: So I find myself having to explain what Linkerd is, what a service mesh is. I'm like, I've been doing this-- I've been explaining the same thing for nine years now, but there's still new people coming to this audience. So I think it's a great sign. But you're right. This is still new for a lot of people. It's still new.
ABDEL SGHIOUAR: Yeah, yeah. And so let's dive a little bit into it. Like, looking at the architecture, it does have, from what I could see, mostly three components in the control plane, right-- destination, identity, proxy injector. Are these like individual pods, typically?
WILLIAM MORGAN: It's a little messier than that. The destination component actually has-- yeah, those are three pods. The destination component has three containers in it. And the other ones all have one container. And we've gone back and forth over time. And probably, in the future, we might change some of those details. Well, there's like a balance that you're always striking between-- you want to have a simple deployment. You don't want to have 100 things there.
But you also-- each of these control plane components kind of has different scaling requirements and different resource consumption requirements as the data plane scales. So you don't also necessarily want to lump them all into one. Like any engineering decision, it's really a set of trade-offs and where you want to balance yourself across a set of trade-offs. So today, yes, there's three basic pods. And then there's a set of containers inside them.
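(The control plane layout William describes can be inspected directly; a sketch, with names reflecting recent Linkerd versions.)

```sh
# The three control plane deployments: destination, identity, proxy injector.
kubectl get deploy -n linkerd

# The containers packed inside the destination pod.
kubectl get pod -n linkerd \
  -l linkerd.io/control-plane-component=destination \
  -o jsonpath='{.items[0].spec.containers[*].name}'

# The proxy injector is registered as a mutating admission webhook.
kubectl get mutatingwebhookconfiguration linkerd-proxy-injector-webhook-config
```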
ABDEL SGHIOUAR: My follow-up question to that was going to be, do you realistically, in production, see these requiring different scaling capabilities or different scaling requirements, like with high-scale deployments?
WILLIAM MORGAN: Oh yeah. No, definitely. Yeah, like the proxy injector, for example, that's a mutating admission controller thing. The only time it gets called is when a pod gets created. And so if you suddenly create a whole crapton of pods, or if you have lots of pod churn, then that thing is really heavily used. If not, then it requires minimal resources. It's basically doing nothing.
Whereas the destination component, a lot of what that is serving is service discovery lookups, kind of like dynamic configuration. So that tends to be the heaviest thing to scale. And it scales kind of roughly along the lines of how many-- what's the size of your data plane?
It also depends on what the data plane is doing and how much traffic it's taking and how many new pods are being created. And so that turns into, how many lookups do we have to do? So they do have pretty different scaling considerations based on the nature of your application.
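(Because the components scale differently, they can be replicated independently. A sketch using plain kubectl; a production setup would more likely pin replicas through Helm values.)

```sh
# Scale only the destination controller, which tends to be the hot spot
# as the data plane grows, without touching identity or the injector.
kubectl scale deploy/linkerd-destination -n linkerd --replicas=3
```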
ABDEL SGHIOUAR: I see. Yeah, and I guess you probably might have guessed it. My question is always related to that infinite debate about microservices versus monolith. And other service mesh tools have gone back and forth between the two, right? So that's why I was--
WILLIAM MORGAN: Yeah.
ABDEL SGHIOUAR: --I was asking--
WILLIAM MORGAN: Yeah, and I don't have a strong-- I think earlier in my career, I would have had this kind of strong philosophical viewpoint. And at this point, I tend to be pretty pragmatic about it. I think there's lots of value-- like, Istio, kind of famously, went to the monolithic control plane. I think there's value to that.
Is there enough to push Linkerd over the edge to do that? Maybe. Every once in a while, we kind of consider it and try and outline the pros and cons. But it's just a collection of trade-offs at this point.
ABDEL SGHIOUAR: Yeah. Yeah, cool. And so coming down to that, then, if we go down to the data plane, I would have to-- and excuse me for the term. You guys are a little bit old school. You are still doing proxies.
WILLIAM MORGAN: Yeah. Well, this is a more--
ABDEL SGHIOUAR: All the new kids are not doing proxies anymore.
WILLIAM MORGAN: Well, no, no, no. Abdel, they're still doing proxies. Everything ends up being a proxy. It's just, where do you put that proxy? And then, how much do you talk about it?
ABDEL SGHIOUAR: Let me correct my statement. The cool kids are not doing sidecars anymore.
WILLIAM MORGAN: Well, I wouldn't even say they're the cool kids. I think we're the cool kids. And they're the fuddy duddies because the first version of Linkerd didn't use sidecars either. The first version of Linkerd used per-node proxies.
ABDEL SGHIOUAR: A shared proxy.
WILLIAM MORGAN: Now, this is like in the ancient days. This is like 2016, I think. I'm not saying this is a good-- we moved away from the model for a reason. But part of the reason was because we had these very heavyweight proxies at that point. And they were actually running on the JVM. They were written in Scala. We had built them on top of a bunch of Twitter open-source libraries that we had used. And the JVM, it's not very cool these days, but it is very good at scaling up. It's not very good at scaling down.
And so we were like, well, these proxies are like 150 megs. There's no way that we can really use them as a sidecar. So we'll just put them on a per-node basis, where you kind of can amortize that resource usage. And it worked pretty well. And that was the first version that went to production, kind of famously at companies like Monzo.
But then what we learned-- and the reason we moved to sidecars later was it was just really annoying operationally because you'd have this one proxy would be responsible for whatever random selection of pods had been allocated to that node. And then if that proxy died, or if you were rebooting it because you were upgrading it or whatever, then some random section of your application--
ABDEL SGHIOUAR: Is not working.
WILLIAM MORGAN: --of your applications would be impacted. And so kind of the operational blast radius was really random. And the security blast radius was not great. You have all the TLS certificates sitting in memory in this one thing. And you're like, well, this kind of defeats-- we did all this work to get containerization and isolation.
And now we've kind of undone it. We took all our TLS certificates, and we're just mixing them in memory in this one process. So for a variety of reasons, we moved to sidecars. So yes, we're the cool kids. They're the fuddy duddies that are going back to the olden ways.
ABDEL SGHIOUAR: All right. So I just wanted to see how much I can trigger you with that. But so let's talk about the proxy. So it's called Linkerd2. I assume that's because there was a Linkerd1.
WILLIAM MORGAN: Yeah. Yeah, that's right.
ABDEL SGHIOUAR: And it's written in Rust?
WILLIAM MORGAN: Yes. Yeah.
ABDEL SGHIOUAR: And you call it a microproxy. So it's just a sidecar that does basically what most sidecars in most other service meshes do, right?
WILLIAM MORGAN: Yeah. Yeah, that's right. That's right. So what we did, we had that kind of 1.0 architecture based on the JVM and per-node proxies. And at some point, we were like, this just isn't really serving our users' needs. And so we wrote Linkerd 2.0.
And by the way, you should never rewrite your project from scratch, right? This is like the classic second-system syndrome. Like, every software engineering textbook is like, this is a bad idea. So we did the bad idea. And that's what came out. Maybe we got lucky.
But yeah, and so we said, if we were to do this the right way, what do we think is the most efficient, most effective, especially from the operational and security perspectives? And it ended up with us writing what we're now calling a microproxy-- I don't think we used that term at the time-- written in Rust that was designed to run as a sidecar.
And the reason why we ended up calling it microproxy is because it's really distinct from something like Envoy or Nginx, which are these kind of general-purpose proxies, right? They're like these Swiss Army tools. You can use it for anything. You can front your cluster with it. You can put it against the open internet. Or you can treat it as a sidecar or do whatever.
For us, we were like, we're going to trim it down to the bare minimum. Like, what is just the features that you need when you're running in a cluster, and you're next to a pod? And we're going to do it in Rust. And this is like 2018. So the choice to use Rust was pretty scary at that point. The language had just solidified. Actually, I don't-- the first version of this might-- Rust might have been pre 1.0 or something. And the--
ABDEL SGHIOUAR: Yeah, it was quite new at the time.
WILLIAM MORGAN: Yes, the ecosystem was--
ABDEL SGHIOUAR: Yeah. You know, funny story-- I had the opportunity to interview Matt Klein, the person who wrote Envoy. And in the interview-- it's actually on the show-- Matt Klein said if he got the opportunity to rewrite Envoy, he would rewrite it in Rust, because Envoy is written in C++.
WILLIAM MORGAN: Yeah. Well, I think in the world of-- what year is it, 2025-- that was a clear win. But at the time, it was a real gamble. And we had to spend the next couple of years-- again, this is a terrible idea. You should never do this. But we had to spend the next couple of years kind of funding some of the basic networking ecosystem.
There's libraries-- if you're a Rust person, there's libraries like Tokio and hyper and h2 that are the foundation of any HTTP proxy-- that just weren't there or were in the most rudimentary forms. And so we had to fund that basic development just to get this proxy to that point. Now, in 2025, we're like, oh, it was a great idea. Awesome. But gosh, it was a big, big gamble at the time.
ABDEL SGHIOUAR: Yeah, I'm just going to go out on a limb and assume you hired a bunch of junior engineers that just had this idea of, let's rewrite everything, right? Because that's every junior engineer's dream.
WILLIAM MORGAN: It's even worse. It was our most senior engineer.
ABDEL SGHIOUAR: Oh, wow. OK.
WILLIAM MORGAN: With a junior engineer, you could be like, well, you know-- you can argue. With a senior engineer, you're like, well--
ABDEL SGHIOUAR: Sure, sure. All right, so then-- OK, back to the point of Envoy. So this was how I first found out about Linkerd, and about you, because I read the article about why Linkerd doesn't use Envoy. And this is back from 2020, right?
WILLIAM MORGAN: Yeah.
ABDEL SGHIOUAR: So you touched on it a bit. So Envoy is a general-purpose proxy, and the Linkerd2 proxy is a mesh-specific microproxy. Can you talk a little bit more about why not just use something that exists, which might have been easier?
WILLIAM MORGAN: Yeah, I felt kind of uncomfortable writing that article, too. This was back in 2020, like you point out. But there were so many people who were saying, well, why don't you use Envoy? Envoy is the de facto standard for service meshes. Why are you doing this? And I was like, you know, Envoy's a great project, and I don't want to like shit on Envoy. Do I have to really write a blog post where I'm like, and here's why Envoy-- so I tried to make it as friendly towards Envoy as possible.
But I think for us, the core insight was the data plane is kind of the heartbeat of-- it's the most important part of the service mesh because that's the part that your application traffic is going across. And even back then, we had users and adopters who were sending financial transactions across, or who were sending medical data across, or who were sending incredibly sensitive information through the proxy.
And it didn't feel right to us, as kind of security-minded people, to use C++ for something like that. No human being is able to write C++ in a safe manner. It just felt like we had to build something that was really going to be future proof. And so that led us to Rust. If you're not a Rust aficionado, the biggest difference is that the language and the compiler have a whole set of checks and technologies and whatever-- the borrow checker, kind of famously-- that prevents you from misusing memory in unsafe ways.
And so that ends up meaning your Rust program is going to run just as quickly as C and C++. And it's close to the metal. But you avoid this whole class of security vulnerabilities that are unfortunately endemic to C and C++. So even back in 2018, when we were making this call, we were like, it just didn't feel right to use something that was going to be unsafe by default.
ABDEL SGHIOUAR: This is-- like, I've heard all sorts of arguments against C and C++ and for Rust. This is the first time I hear a safety- or security-focused argument, right?
WILLIAM MORGAN: Oh, really?
ABDEL SGHIOUAR: Yeah, it's the first time I hear somebody say it. Like, it makes so much sense when you talk about it. But I've never heard somebody say, it doesn't feel right to write something that could potentially be unsafe that would handle sensitive data. And you're right-- which is funny, because a lot of other open-source tools that the internet relies on are actually written in C++. And they handle, arguably, a lot of sensitive data.
WILLIAM MORGAN: Yeah. Yeah, no. And there are ways of making-- those programs become secure over time, hopefully, through discovery, through humans spending a lot of time and energy on it. It just didn't-- that's not the future of software. Even in 2018, we were like, that can't be the future of software.
ABDEL SGHIOUAR: Yeah, sure. And so I'm going to bring you back a little bit before 2018. Linkerd has been around since 2016. So that's almost as old as Kubernetes itself. What have you seen change in the way people use it or adopt it over the years?
WILLIAM MORGAN: Gosh, there's been so much evolution in the space. There's the obvious stuff, where it's like, OK, well, we had to justify why Linkerd existed for a long time. And now we kind of have more interesting conversations than just that.
We had to do a lot of work in the early days to distinguish the different types of proxies because everyone at that point was very familiar with proxies, right? Nginx was around. Apache was there. You put your proxy in between your application and the internet. And that's a proxy.
And we were saying, well, actually, you need 10,000 proxies. And you should put one in every pod. And people are like, oh, that's crazy. Like, I can't imagine running 10,000 Apaches in my cluster. They all require so much tuning and feeding. And we're like, well, no, no, this is a different thing, and you're not going to have to become a proxy tuning expert.
But it just took a long time, I think, for people to come to terms with the fact that, because Kubernetes was like this building block that we could build on top of, a lot of the stuff that seemed kind of crazy to say at that point-- hey, we want you to deploy 10,000 proxies-- actually became pretty practical. Like, that was one command.
ABDEL SGHIOUAR: Yeah.
WILLIAM MORGAN: Right? Like, we had tools like the proxy injector, like mutating webhook and admission controllers and things like that, where you actually could kind of do this. We had containers. So part of it was just upleveling people's expectations of what was reasonable for a piece of software to do.
One thing that's been really interesting for me, I think, has been watching how Kubernetes adoption patterns have changed. Everyone who's using Linkerd is using it with Kubernetes. So we kind of like get exposed to how they're doing it.
ABDEL SGHIOUAR: Yeah.
WILLIAM MORGAN: Multicluster is a really interesting one for me. We added the first version of multicluster in-- I don't know, 2019, 2020. It was pretty early into the project. And it was basically like, hey, I've got these two clusters. And I need them to talk to each other. So how do I do that safely and securely and without the application having to know about the cluster topology? So we built these features to do that, which was really cool.
But the use case then was kind of ad hoc. It was like, oh, I had this one cluster from this one team. And this other team came up with their Kubernetes cluster. And now we have to kind of piece them together.
What we're seeing now is a very different type of usage, where people are-- out of the box, they're like, OK, we're going to deploy 200 clusters, and we need them all connected to each other-- or 1,000 clusters. You're like, OK, well, now there's a set of multicluster features that actually are very different. It's like this planned adoption versus ad hoc adoption. So you just see things like that in the way that people are adopting Kubernetes that end up influencing the Linkerd roadmap.
Like, we added this feature recently called federated services-- this was in 2.17, which is the latest release-- where you can basically treat-- if you have the same service deployed across 200 clusters, you can treat it as one logical service. And you're talking to that service, and we'll just load balance across all of those endpoints. And if a cluster's down, or if one pod is down, or if five clusters are down, or if there's a network issue, it doesn't matter. We'll just load balance across it. And that would have been crazy five years ago. But now it's a pretty common use case.
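(A sketch of how this multicluster wiring looks in practice. The link commands are standard multicluster usage; the federated-services label follows the Linkerd 2.17 announcement and should be treated as an assumption that may vary by version. Cluster, namespace, and service names are placeholders.)

```sh
# In each cluster, install the multicluster extension.
linkerd multicluster install | kubectl apply -f -

# Let the "east" cluster discover services running in "west".
linkerd --context=west multicluster link --cluster-name west \
  | kubectl --context=east apply -f -

# Mark the same service in every cluster as a member of a federated
# service; members get merged into one logical "api-federated" service
# that load balances across all clusters' endpoints.
kubectl label svc/api -n demo mirror.linkerd.io/federated=member
```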
ABDEL SGHIOUAR: As somebody who has been doing service mesh implementations-- doing multicluster has been ad hoc, as you said, but now it's kind of becoming the norm, more or less. Mostly for high availability, right-- people want to reduce the blast radius-- or you deploy clusters across multiple regions in a cloud environment so you can get access to resources that you cannot get in one particular region, or for latency, or whatever, right? So those multicluster use cases are kind of becoming part of the discussion now. You're not going to look crazy if you say, I'm doing multicluster now.
WILLIAM MORGAN: No, and kind of related to that, it seemed like in the early days, there was a big focus on these large, multitenant clusters. And so we tried, in the very first early days of Linkerd, to make it so you could install Linkerd in just one part of the cluster or one namespace. And if you didn't have cluster admin privileges, maybe you could just install it in one namespace, so that one tenant has a mesh.
And that ran into friction pretty quickly. But the pressure to do that has also vanished. That was kind of like the Mesos approach to the world-- which is where I cut my teeth on distributed systems at Twitter-- one giant cluster, multitenant. Now people are gravitating towards lots and lots of--
ABDEL SGHIOUAR: Small clusters?
WILLIAM MORGAN: --Kubernetes clusters that are smaller that are often single tenant.
ABDEL SGHIOUAR: Yeah. And Linkerd-- just from reading the documentation, it sounds like the philosophy is also following this minimalistic approach, right? Like, out of the box, you get the minimal-- and by minimal, I don't really mean minimal in a bad way, but the minimal amount of features available out of the box. And then if you need stuff, you have to install it later, right? Is that like a fair way to describe it?
WILLIAM MORGAN: Yeah, like reduce the config burden as much as possible, but still give you as much as possible by default.
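(That minimal-by-default shape shows up in how the extras ship as opt-in extensions; a sketch, and the exact extension set varies by version.)

```sh
linkerd viz install | kubectl apply -f -            # on-cluster metrics and dashboard
linkerd jaeger install | kubectl apply -f -         # distributed tracing
linkerd multicluster install | kubectl apply -f -   # cross-cluster communication
```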
ABDEL SGHIOUAR: Yeah, and so that comes through plugins and a bunch of other ways. And speaking of this topic of minimum-- so this is-- I'm going to toot my own horn a little bit here. I wrote an article a while ago-- or, I did a talk and an article-- called You Probably Don't Need a Service Mesh.
WILLIAM MORGAN: Oh no!
ABDEL SGHIOUAR: And it was basically--
WILLIAM MORGAN: How could you say that? My life's work here.
ABDEL SGHIOUAR: I am happy you don't know about it because some other people found it, and they were not very happy. Like, my whole point about that content was to try to tell people, you shouldn't adopt a service mesh without understanding what you are doing, right? It was an argument about blind adoption.
And my typical joke when I do this particular talk, when I did it in the past, was to tell people, I want to tell you why you shouldn't adopt a service mesh just because your security person read an article about mTLS and came to you and said, we need mTLS, therefore we need a service mesh. Do you see my point? That's like blind adoption of technology without understanding what it does, right?
WILLIAM MORGAN: Yeah, yeah.
ABDEL SGHIOUAR: So then this long introduction to go to my question, where do you see a service mesh today in the cloud native architecture in general? Do we need it? Do we require it? Is it optional? Where do you see that debate settling, if it will settle?
WILLIAM MORGAN: Yeah. Well, first, I think you're absolutely right. Blind adoption, by definition, is not what you want to do. And it's funny, because I think that-- I love the cloud native community. It's a community of people who love learning and who love new stuff and who love trying stuff.
And I think that's largely a positive thing. I think it does have a consequence, where there is a lot of kind of excited adoption of something that maybe you don't really need because it sounds cool, and it feels cool. And we certainly felt that in the early days of Linkerd. Linkerd is no longer cool, so we see a lot less of that now. It's like they've all gone off to eBPF or whatever, Wasm, or whatever the new cool thing is. But I think that's almost a part of this community. It's just like this excited adoption of something that maybe you don't actually need.
ABDEL SGHIOUAR: Yeah.
WILLIAM MORGAN: Is service mesh required? Yeah, this is a little self-serving, but I think we're getting to the point where it kind of is for nontrivial deployments. Obviously, if you have a cluster that's like your home cluster, or you just have one cluster, and you don't have requirements around encryption of data in transit or anything like that, you don't need it.
But if we're in the case of, oh, I've got 200 clusters, and they're going to have to communicate with each other, you kind of need a plan for that. You need a plan for providing that platform in a way that the developers are not exposed to the cluster topology, and they're not hardwiring in these things, like cluster names or whatever, into the code.
So one way or another, you need to solve that. And anything that a service mesh can provide, you could solve in another way, right? Like, you could solve it in the application layer. Even TLS-- have your applications do it. It's horrible. But you could do it.
But I think the reason why Kubernetes is so powerful-- one of the reasons why it's so powerful is because it fits into this model where we're building a platform for our developers, right? Like, we're kind of the service organization. We're the platform team. The devs are our customers.
And in that world, you want to provide them with a bunch of capabilities without requiring them to write a whole bunch of platform-specific code. And I think Linkerd fits right into that, whether it's multicluster, whether it's TLS, or whether it's something else. That's part of why our focus has been so much on, how do we provide these features to you without you having to change code?
Because changing code, it's not even you who's changing code. It's not the platform owner who's changing code. You'd have to ask the developers to change the code. That's like the hardest conversation to have. So yeah, there's simple cases that don't require it. But I think as Kubernetes adoption evolves, it starts becoming pretty important.
ABDEL SGHIOUAR: Yeah. It's interesting because, in my experience five years ago-- you mentioned the topic of encryption in transit-- that conversation was something I would have with regulated environments, regulated industries. So you talk to banks or health care.
Now it's, in my experience, actually becoming something that you are-- it's a conversation you're having across the board. It doesn't really matter. It's just sometimes-- it's like we're running cloud, but we don't trust it, right? Like, we just want to encrypt our data. We don't--
WILLIAM MORGAN: Yeah.
ABDEL SGHIOUAR: Like, we don't-- it's as simple as that.
WILLIAM MORGAN: Well, you don't own the network. Your competitors could be running on the same network. You've got no idea.
ABDEL SGHIOUAR: Exactly.
WILLIAM MORGAN: So in the move from on-prem to cloud, you've lost all these guarantees you used to have, right? On-prem, you had the physical wires. And you had the racks with the locks. And you're like, all right, this data, it's not going anywhere.
ABDEL SGHIOUAR: It's mine.
WILLIAM MORGAN: It's staying here. And now you're like, you got no idea. So you have to recover some of those things that you lost from hardware. You have to recover them in software.
ABDEL SGHIOUAR: Yeah, yeah. I just have one more question for you. There was actually a panel discussion that you were a part of. And I think the topic of that panel discussion has probably, for me, been the defining topic of open source in 2024. And there have been more discussions going on through 2024. I don't want to go into those discussions now. The panel discussion was about balancing-- and I'm going to introduce it and then let you correct me.
WILLIAM MORGAN: Sure.
ABDEL SGHIOUAR: Balancing open source and business, kind of like, how do you-- like, how do you find that right line between being a maintainer of a project, which obviously means hiring people and paying them, and making money out of an open-source project, right? What's your take? Like, what's your thought on that?
WILLIAM MORGAN: Yeah, gosh, how much time do we have? It's like my life's work here.
ABDEL SGHIOUAR: Of course.
WILLIAM MORGAN: And Linkerd kind of has a unique position here. Part of the reason-- so I was on two very similar panels, one from kind of the founder-CEO perspective and one from the maintainer perspective, because I kind of play both of those roles. The reason why those panels were there was because there's been a big change over the past 18 to 24 months in the world of open source.
And I should say the world of commercial open source, because open source is a huge spectrum. And what's true of the Linux kernel is very different from what's true of Python, which is very different from what's true of Linkerd, which is very different from what's true of other projects. So we're talking specifically around what I would call commercial open source, which is really what is in the CNCF ecosystem. It's like--
ABDEL SGHIOUAR: Yeah. Yeah, pretty much, yeah.
WILLIAM MORGAN: --with one or two exceptions, pretty much every project in this ecosystem is built by-- there's a company behind it that's paying the maintainer. Sometimes there's multiple companies. Although really, if you look, the majority of the time, there's really one company, where if that company goes away--
ABDEL SGHIOUAR: One paying most of them?
WILLIAM MORGAN: --that project is failing.
ABDEL SGHIOUAR: Yeah, yeah.
WILLIAM MORGAN: And there's this big change. So OK, we're in that ecosystem. And there's a big change that's happened, where two years ago, we were in what we call the ZIRP world, right, zero interest rate policy. Money was free. VC money was flowing. Everyone's like grow, grow, grow, grow, grow. Hey, you're a startup that's making an open-source project? I want to see your adoption numbers. And if you get bigger adoption numbers, then you'll get more funding. And that was everyone's focus.
And then the economy changed. It contracted. VC money dried up. And suddenly, a lot of projects and a lot of companies behind them were left in a situation where they don't have a functioning company. And they have this open-source project with a lot of adopters. And they're like, I don't know what to do here. So that's scary. That's really scary from the CNCF ecosystem perspective.
And we saw signs of that. We saw Weaveworks shutting down. That was crazy, right, the creators of GitOps. That was kind of a pillar of the community. Gone. What happens to Flux? Well, for a while, we didn't know. And then luckily, Flux found a home. But Flux could have disappeared easily. We saw projects like Redis--
ABDEL SGHIOUAR: Change license.
WILLIAM MORGAN: --and others change licenses. Like, suddenly, all this stuff is going down. And I think if you're an adopter of one of these projects, you're in the ecosystem, and you're relying on Kubernetes and Linkerd and Flux and some of the other projects, you're probably being like, holy crap, what's going to happen? Because I built my company on these projects. And now, suddenly, a lot of them are having trouble.
So I was on this panel, where we kind of explored these issues. There's one approach, which is to say, you know what? We don't want those projects. Forget those corporate OSS projects.
We only care about-- let's have projects that are true, multivendor, like Kubernetes, and arguably like Envoy. That's the only type of project that should be in the CNCF. And all the other projects should just die. That's one approach. Obviously, I don't like that one because I don't want it to die.
And the other approach, which is the one that I was advocating for in these panels, is, hey, let's be really honest and upfront about what these things are doing. Like, let's not pretend anymore that, hey, this is all one community of altruistic people coming together in harmony.
Like, let's be upfront about the fact that pretty much everyone in these conferences or using these projects is doing it for money, right? You're pulling this project in because your company is trying to build a business on top of it, or you're working for a vendor as an open-source maintainer because you're trying to build your business.
Everyone is in here for money. So let's be upfront about it. And let's not make it this shameful thing, where, oh, the vendors, they're off in the hall over there. We keep them over there. And you only go in there if you want socks. And the rest of the time, we're talking about this pure open-source community. That's not really what's happening here.
And the reason I was on those panels is because Linkerd went through this process in February of-- so almost a year ago. February of last year, we made a change that kind of profoundly improved our ability to provide Linkerd to users. But it was a controversial one at the time. So I was there, talking about how that worked.
ABDEL SGHIOUAR: Yeah. It's definitely an interesting thing to notice from-- I mean, I work for a vendor. But I'm obviously not very involved in the commercial side of things.
WILLIAM MORGAN: Don't be ashamed. No one here works for a vendor.
ABDEL SGHIOUAR: Yes. No, what I mean is that I don't have a skin in the game in terms of-- well, I do, in the sense that if Kubernetes is not making money, I don't have a job.
WILLIAM MORGAN: Yeah, yeah, exactly.
ABDEL SGHIOUAR: But the point here is that I don't have the decision-- the magic wand that I can wave to make a decision. But there was one thing that I saw that you wrote-- I don't remember where exactly-- that resonated with me so much, because I am coming from that perfect open-source world. Like, we are all friends, and everybody is happy, and we're all just kind of happy friends with each other.
And then there was that change that happened in February, which was like, doing open source does not necessarily mean providing everything working out of the box, right? Like, we're all spoiled in open source, basically. Everybody just assumes you can have access to everything available all the time. And it was kind of like a wake-up call. And I find it interesting, right? Like, I find the discussion that's been happening interesting. And I think that's going to carry on through 2025 and beyond, right?
WILLIAM MORGAN: Yeah, I hope so. I mean, I certainly had a lot of maintainers of other projects reach out to me when we made that announcement and say, hey, let me know how that works, because we're in a similar situation, and we have to figure out what to do. And so I was very happy that, seven or eight months after we made that change, I was able to go out and write a blog post about it-- the thing we did.
And what we did was-- we didn't have to change a license, because I was really trying to avoid that. We just changed the way that we were providing stable release artifacts. We basically said, we're not going to provide open-source stable release artifacts; we're going to provide the weekly release artifacts. And if you want something that's called stable and has semantic versioning guarantees, then you need to figure out how to get your company to pay for it.
And we did it in a nice way. If your company's fewer than 50 employees, just use it for free. That's fine. But if you're a big company, and you're building a business on top of Linkerd, we said, basically-- and you're treating it as like a consumer, then you got to figure out a way to pay--
ABDEL SGHIOUAR: To pay for it.
WILLIAM MORGAN: To pay for it, yeah. And you got to find a way to fund this project because these maintainers are paid maintainers. And I think you want that.
ABDEL SGHIOUAR: Yeah.
WILLIAM MORGAN: As a Linkerd adopter, you want this project to be around. You want the maintainers to be swimming in money and just producing features without a fear in the world. Like, that's good for you, and it's good for them.
But yeah, in October, I was able to write a blog post. I was like, hey, it worked. The company behind Linkerd, Buoyant, is profitable. We're adding more maintainers to the project. We've kind of found this way of making Linkerd a truly long-term project. We've been around for almost 10 years. Why can't we be around for another 90 years? Like, what do we have-- you know?
ABDEL SGHIOUAR: Yeah.
WILLIAM MORGAN: And I was very fortunate to be able to write that. And I feel very happy about that because not every project, I think, is going to be able to make that transition. But I wanted to do it in a really upfront and honest way and say, look, there is a company behind Linkerd. We have to make money for Linkerd to survive. So this is how we're going to do that.
ABDEL SGHIOUAR: Yeah, open source costs money. That's something that people don't realize. Well, William, that was a fantastic conversation. I certainly learned a lot. So thank you for your time.
WILLIAM MORGAN: Oh, absolutely. Thank you. And I learned a lot, too.
ABDEL SGHIOUAR: And yeah, I hope we'll be able to talk again five years from now, when Linkerd announces the Linkerd3 proxy.
WILLIAM MORGAN: Yeah. Oh boy, I hope not. Let's talk sooner than that, but without hitting the 3.0 milestone.
ABDEL SGHIOUAR: All right, sounds good. Thank you very much, William.
WILLIAM MORGAN: All right. Thanks, Abdel. Great talking to you.
[MUSIC PLAYING]
KASLIN FIELDS: Thank you very much, Abdel, for that interview with William Morgan of Buoyant, talking about Linkerd. This is one that we've been talking about for a while. Linkerd is a significant project in the CNCF ecosystem. And so it's been doing some really interesting, technical things. And I loved the conversation about open source and business. So thank you for doing that, Abdel.
ABDEL SGHIOUAR: Yeah, it technically graduated in the CNCF before Istio did. So it has been around for a very long time as a service mesh tool, right?
KASLIN FIELDS: I mean, Istio took a while to become part of the CNCF. And then once it did, I think it was already graduated when it joined, or was it incubating?
ABDEL SGHIOUAR: It jumped some steps.
KASLIN FIELDS: Yeah.
ABDEL SGHIOUAR: Yeah. It took like a fast track.
KASLIN FIELDS: Yeah.
ABDEL SGHIOUAR: But yeah, Linkerd, it was on our mind for a while. And I've been wanting to talk to somebody about it. Last time it was on the show was 2020. So it's been a while. So it's good to catch up and kind of figure out what's new.
KASLIN FIELDS: Especially now that it's graduated.
ABDEL SGHIOUAR: Yeah.
KASLIN FIELDS: And I think early in the conversation, you all were talking about where service meshes fall in today's world. I remember some meetups that I spoke at when I was trying to explain the concept of service meshes early on. And I was trying to come up with an analogy for it, like a net and the mesh and networking.
And I ended up describing service meshes as the things that Kubernetes doesn't do that you're like-- you start using it, and you're like, oh, I really need those. Service mesh just provides a bunch of those.
ABDEL SGHIOUAR: That's one way of looking at it, certainly. Yeah. It's basically a lot of features that you don't have in native Kubernetes that if you want--
KASLIN FIELDS: Just don't make sense in native Kubernetes, really.
ABDEL SGHIOUAR: Which you're right, yeah.
KASLIN FIELDS: Yeah. They're application focused.
ABDEL SGHIOUAR: Correct, or they are sometimes platform specific, right?
KASLIN FIELDS: Yeah, therefore meshing your services together.
ABDEL SGHIOUAR: I mean, before we go into the mesh discussion, a very simple example would be encryption, like end-to-end encryption. Like, there are multiple ways you could do end-to-end encryption between pods.
KASLIN FIELDS: I have let myself get stuck in the trap, though, of letting myself describe service meshes through that. And it's like, that's such a clear example that it's easy to be like, this is a thing that service meshes do, and then not explain any of the other things service meshes do.
ABDEL SGHIOUAR: I agree with you, except that the thing with this particular example is that on the surface, encryption between pods is an easy thing to understand as a use case, right? But the implementation could vary depending on how you want to do it, right?
KASLIN FIELDS: That's true.
ABDEL SGHIOUAR: And there are multiple ways of implementing it. So Kubernetes trying to be this platform-agnostic tool, it would be very hard to say, oh, we're going to do it this way, because then it would not be platform agnostic. It would be more opinionated, right?
So although technically, all service meshes kind of use mTLS-- and that's kind of an interesting thing, where all of them converge toward a single way of doing encryption, because there are certainly other ways. Like, you could use WireGuard. You could use a VPN. You could use other tools. But it's just an example, right?
KASLIN FIELDS: Yeah. With the prominence of mTLS, I hadn't even thought about the point that you could do other forms of encryption. So even if you were just looking for a service mesh for that one aspect of meshing your services together, there is variety that is caused by the distributed system that is Kubernetes. And so service meshes could be useful in that way. But yeah, people mostly just use mTLS.
ABDEL SGHIOUAR: Yeah, that's kind of my pet peeve with-- well, it used to be my pet peeve a few years ago when I wrote my article, You Probably Don't Need a Service Mesh. That came out of a couple of years of working with people that just used it for the mTLS parts, where most of the time I was like, yeah, but there are other things you can do with this thing, right? So yeah.
KASLIN FIELDS: Invitation to those listening out there to interact with us, tell us how wrong Abdel is with his talk, you don't need a service mesh.
ABDEL SGHIOUAR: Yeah, so it's not you don't need a service mesh. It's you probably don't need a service mesh. There is a clear separation there, right? I stole the title from [? Mophie, ?] but--
KASLIN FIELDS: That is very [? Mophie. ?]
ABDEL SGHIOUAR: --I was invited to a bunch of places with that talk. So I think talks like this have a sort of clickbaity feel to them.
KASLIN FIELDS: Yeah, sometimes the clickbaity titles work out. You can go too far with it, though.
ABDEL SGHIOUAR: Yeah, I mean, you have to have some substance behind what you're going to talk about, right?
KASLIN FIELDS: But the point that you should not make blind decisions about what kind of tools you're using is always a valid one.
ABDEL SGHIOUAR: Correct.
KASLIN FIELDS: And so focusing that on service meshes, I think, is a particularly poignant one because, especially, as we were talking about, when service meshes first started up, nobody knew what it meant. And it's also pretty hard to understand what it is because it's so many things at the same time.
ABDEL SGHIOUAR: Yep. Pretty much, yeah.
KASLIN FIELDS: So it's a very valid thing to talk about.
ABDEL SGHIOUAR: So you know what? One thing that came to mind right now-- just flashbacks to the consulting time when I was doing service mesh implementations. Service mesh was very interesting as a concept, in the sense that in my interactions with different personas, no one really seemed to understand it. Like, I would work with people who are into cloud native and Kubernetes. And they wouldn't understand it because that's too low level for them.
And then you would expect, well, certainly, if I talk to somebody who's into network engineering, they would. And they don't because that's too high level, right? And so it was always this just floating thing that no one really knows what it does. It's just like, oh, it's like a layer. It's like, yeah, but where? Which layer? What are we talking about?
KASLIN FIELDS: It's super handy for certain things if you know what it's handy for and need those things. But you both have to know what you need and that it does that.
ABDEL SGHIOUAR: Pretty much, yes.
KASLIN FIELDS: These are two difficult things to figure out.
ABDEL SGHIOUAR: Yeah, and it's certainly coming from companies of the scale of Google, where they need something that works at high scale and hides the implementation details. I'm not saying that the idea came from Google, but from companies like Google, right?
And so obviously, this is one of those-- it's a technology that has been evolving to serve multiple things within one entity or multiple entities. And then, when it makes its way to the public, not everybody necessarily has those kinds of problems. And yeah, it's just a much longer conversation, I guess.
KASLIN FIELDS: Technology.
ABDEL SGHIOUAR: Yes, yes.
KASLIN FIELDS: One of my favorite aspects of technical evolution that you all talked about in this conversation was Envoy.
ABDEL SGHIOUAR: Oh yes. Oh yeah.
KASLIN FIELDS: Let's talk about Envoy.
ABDEL SGHIOUAR: Yeah, I am a big fan of the Envoy project by itself.
KASLIN FIELDS: Yeah.
ABDEL SGHIOUAR: But I do understand Linkerd deciding not to go down the Envoy path, because it's probably overkill for what they were trying to achieve, at least when they were doing the implementation, right?
KASLIN FIELDS: Yeah. A whole separate project that is designed to be a proxy, it kind of gets into the space we were just talking about, where service mesh is flexible for all those different environments that you might be running Kubernetes in. And Envoy, as a standalone project, also has those considerations of it might be run in different environments and things. So it's got flexibility built into it that you might not strictly need for Linkerd. And so they did their own implementation of proxy.
ABDEL SGHIOUAR: Yes, correct. And also, Envoy technically came out before, I think. I'll have to double check that, but probably before most service mesh tools that exist on the market. So it was designed to solve completely different problems. And then it was adopted by Istio eventually, right, if we talk about Envoy in the context of Istio.
But yes, to your point, it does more than just what a service in the mesh needs it to do.
KASLIN FIELDS: Yeah. Makes sense, but really cool as a standalone project. Maybe we'll explore that someday.
ABDEL SGHIOUAR: Well, I mean, we did an episode with Matt Klein, the creator of Envoy, a few-- I think it was last year or the year before, talking about Envoy itself, like the creation, the evolution of Envoy over time. So that was the person who created it at Lyft. That's where it was created initially.
So actually, one example I want to explore-- speaking of end users, because that's one of the things we want to do more of this year. There are actually companies using Envoy without Istio. Like, there are companies just running self-managed Envoy.
KASLIN FIELDS: Yeah.
ABDEL SGHIOUAR: And that's something I want to talk about, right? I want to have somebody on the show to talk about.
KASLIN FIELDS: Yeah, as a user--
ABDEL SGHIOUAR: Yes.
KASLIN FIELDS: --who does that.
ABDEL SGHIOUAR: Yes.
KASLIN FIELDS: That would be very cool to hear more about. And one thing I really liked about the Envoy conversation, though, was that it also got into sidecars, which is one of my favorite areas in Kubernetes. Do we do sidecars or not? Particularly challenging conversation in the world of service meshes?
ABDEL SGHIOUAR: Particularly challenging because Kubernetes just started officially supporting them last year, right?
KASLIN FIELDS: And even then, they're not called that.
ABDEL SGHIOUAR: Even then, they're not called that, yeah. Yeah, I think in the context of-- we didn't really touch on it with William. But in the context of Linkerd, at least, the Linkerd2 proxy is a microproxy. So it's pretty lightweight, right?
KASLIN FIELDS: Yeah.
ABDEL SGHIOUAR: So it doesn't really consume as many resources. And now that Kubernetes has "first-class support for sidecars," quote unquote, then I think it's a good choice, generally speaking. But it's just interesting to see all these service meshes kind of oscillating between sidecar-full and sidecar-less, or whatever you want to call it, right?
KASLIN FIELDS: Everything you add on to a piece of technology has potential for risk.
ABDEL SGHIOUAR: Yes.
KASLIN FIELDS: Right? So sidecars are one of those areas that get particular attention-- how risky is it to have sidecars on your applications?
ABDEL SGHIOUAR: Yeah, it's funny. There is a saying in my language, in Moroccan dialect, which loosely translated is something along the lines of, add water, add flour. So imagine you're making a dough. The more water you add, the more flour you need to add. And it's like a vicious cycle. Like, where does it end?
KASLIN FIELDS: Yeah.
ABDEL SGHIOUAR: And so if you're Moroccan and listening to me, you know exactly what I'm talking about. And it's kind of like this with technology. Like, you add layers, you add problems, you add complexity.
KASLIN FIELDS: And that's why Kelsey Hightower's no-code is the way.
ABDEL SGHIOUAR: Oh yes, no-code is the future.
KASLIN FIELDS: Alternative code, though-- Rust.
ABDEL SGHIOUAR: Yeah. That was a bold choice.
KASLIN FIELDS: Yeah, bold choice to re-implement Linkerd into Linkerd2 using Rust. Sounds like it paid off for them pretty well. Rust is still pretty popular, I would say. And you reach a point, especially with the evolution of Kubernetes over the last 10 years, there have been things-- like, Ingress is one of my favorite examples, where we thought it was going to go this way. We thought this was what people needed. And it wasn't exactly what they needed. So you had to re-implement it.
And that's just how technology goes is you implement the thing. People want something totally different from what you made. But they're trying to use the thing that you made to do what they want. And then maybe you re-implement it at some point. I liked that you called out that it was probably a junior engineer that did that. And then it was a senior engineer who did that.
ABDEL SGHIOUAR: That was my assumption, right?
KASLIN FIELDS: Both can happen, different reasons.
ABDEL SGHIOUAR: Yeah. Yeah, I think it's quite interesting that they actually bet on Rust when it was still quite new and quite fresh and lacked a lot of standard tooling and standard libraries specifically. So it was quite interesting. I think my joke about junior engineers is basically junior engineers want to re-implement everything, right? So that was the joke.
KASLIN FIELDS: Yes, because you start learning something as a junior engineer, and you're like, why is this thing so terrible? We should make a better version of it. And then you become a senior engineer by trying to re-implement that thing and learning that they had reasons that they did that.
ABDEL SGHIOUAR: Right, that's the life cycle of a software engineer, I guess.
KASLIN FIELDS: Yeah.
ABDEL SGHIOUAR: Yeah.
KASLIN FIELDS: And one of my favorite parts of the conversation, as I already mentioned, was about open source and business.
ABDEL SGHIOUAR: Oh yeah.
KASLIN FIELDS: That is something that's come up a couple of times. We did that episode about OpenTofu and the awkward position of open source in business these days.
ABDEL SGHIOUAR: Yeah, and specifically that they made some changes in the way they distributed artifacts last year, which a lot of people were not happy about. And I'm glad that I managed to get to discuss that with William, because getting to understand his side of things is also interesting, right? Or, I mean, his company's side of things.
KASLIN FIELDS: I liked the way he described it.
ABDEL SGHIOUAR: Yeah.
KASLIN FIELDS: Yeah.
ABDEL SGHIOUAR: Yeah, I think it was well said: open source costs money. You need people to build and maintain this stuff, right?
KASLIN FIELDS: Constant conversation that's happening in open source among maintainers. It's like, we need more help. Who's going to help us?
ABDEL SGHIOUAR: Who's going to pay for it?
KASLIN FIELDS: Yeah, who's going to pay for that? Especially, that's one of my favorite parts of the long-term support conversation that's going on in open source because all the businesses offering Kubernetes want to offer long-term support because none of their customers want to upgrade, which is understandable because upgrading is pain and suffering. It's also critically important. And so all the businesses want to offer long-term support now because it's just so hard for businesses to be able to upgrade on the kind of cadence that Kubernetes releases on.
But realistically, the longer you make the support window for open-source Kubernetes, the more money the open-source Kubernetes project needs to burn actually testing those releases for every release cycle. So it makes the costs for the project go up astronomically very fast. And we're already operating as best we can with the budget that we have. So in open source, it's like, who's going to pay for that?
ABDEL SGHIOUAR: Yeah, that money has to come from somewhere, right? Somebody has to pay for it.
KASLIN FIELDS: Yep. And I liked his perspective about, as a business, how do you do open source as part of your business? And how it was before is not how it is now.
ABDEL SGHIOUAR: Yes. Yeah, definitely. I mean, they could have gone a different way and changed licenses, like a lot of other companies did. So I guess we're going to probably just see more of those going forward, like companies changing positions. And by position, it could be anything from licensing to the way they distribute their software because somehow somebody has to pay for that software being shipped and delivered and distributed to the end user. So yeah, we'll see. It's going to be interesting.
KASLIN FIELDS: And as an end user, if you had the choice between paying for software that is closed source with support and all of that versus paying for software that is open source, where you can see the code and the work that's being done on it in real time, but also get support on it, which one of those do you want?
ABDEL SGHIOUAR: I mean, we're obviously biased because we're going to go with the second choice.
KASLIN FIELDS: We're biased.
ABDEL SGHIOUAR: But yeah, it's an interesting conversation.
KASLIN FIELDS: Yeah, and I'm sure we'll talk about it more as we go through the year in 2025. Happy New Year by the way, Abdel, here at the very end.
ABDEL SGHIOUAR: Yeah, Happy New Year. I think we already said that, but Happy New Year. Yes, thank you.
KASLIN FIELDS: Yeah, more Happy New Year. I'm excited for what we're going to be doing in 2025.
ABDEL SGHIOUAR: Yeah, we have-- I mean, you have seen the emails. I have spent most of my last week just reaching out to people. There's a lot of exciting things coming.
KASLIN FIELDS: Whereas I've spent most of the last couple of weeks working on the report and looking forward to more information about how we did in 2024 soon. Check us out on social media. Yeah. So thank you very much, Abdel. Glad that we got to talk about Linkerd. And I'll see you next time.
[MUSIC PLAYING]
ABDEL SGHIOUAR: That brings us to the end of another episode. If you enjoyed this show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on social media @KubernetesPod or reach us by email at kubernetespodcast@google.com.
You can also check our website at kubernetespodcast.com, where you will find transcripts and show notes and links to subscribe. Please consider rating us in your podcast player so we can help more people find and enjoy the show. Thanks for listening, and we'll see you next time.
[MUSIC PLAYING]