#129 November 10, 2020

Linkerd, with Thomas Rampelberg

Hosts: Craig Box, Adam Glick

Thomas Rampelberg is a software engineer with Buoyant, creators of Linkerd, and a core maintainer of that project. He is also a co-author of the Service Mesh Interface and co-creator of DC/OS. He joins Craig and Adam to talk about the first two, and pour one out for the last.

Do you have something cool to share? Some questions? Let us know:

Chatter of the week

News of the week

ADAM GLICK: Hi, and welcome to the Kubernetes Podcast from Google. I'm Adam Glick.

CRAIG BOX: And I'm Craig Box.

[MUSIC PLAYING]

CRAIG BOX: It's been a bit of an emotional journey these last few days for those of us who have been refreshing the news more than we might on average. Sean Connery passed away. Alex Trebek passed away. That's got to be a great Celebrity Jeopardy happening in heaven.

ADAM GLICK: Yes, I heard a great term I'd never heard before. And someone said, you've got to be careful to just not get too much into doomscrolling. And I was like, yes. But I'd never heard that term before.

But it totally made sense to me. If you can just be on those refreshes and just scrolling through and at a certain point, it was interesting because, normally, I'm the one that is all up on the news. And my wife prefers to just kind of check it once every day or two. And just, she's much more measured about those things. And this past week, I just blocked it all. I was like, I don't want to look at it. I'll find out, but right now, there's just too many crazy things going on. And so, sometimes you need to take that break. That was wonderful, I will say.

CRAIG BOX: Well, if you're not refreshing the news every day, then you must have a Game of the Week recommendation for us.

ADAM GLICK: [LAUGHS] Yes, yes, indeed. I spent a little bit of that time poking around on a game called "Potion Explosion." If I were to think of what would best be the analogy, it's kind of like a board game version of "Bejeweled."

CRAIG BOX: OK.

ADAM GLICK: It's a really interesting design that actually uses this table full of marbles that roll down, and you pick out things that come together and create matches. Just an interesting design; I'd never seen anyone do anything like it, other than in a "Bejeweled" kind of online experience. It was actually a physical game. And so I've enjoyed playing that game this week. How about you? Any distractions besides doomscrolling?

CRAIG BOX: Well, what you said there reminds me of a game called "KerPlunk," which we used to play. I don't know if you've ever had that. But you have a plastic cylinder, which you stick a whole bunch of skewers into, into holes that sort of stick through, such that you'd make a little mesh of skewers. And then you pour a bunch of marbles into the top.

ADAM GLICK: Mm-hmm.

CRAIG BOX: And it's sort of a "Jenga" thing. You have to be the person who pulls out the stick and doesn't let too many marbles fall down.

ADAM GLICK: Yes.

CRAIG BOX: You had that growing up, too?

ADAM GLICK: Yes, I remember it well. Kind of one of those neat games. You'd pull it out, and you'd see how many, or if any, would fall through the bottom. And yeah, you'd try and not be the person that just empties the chute.

CRAIG BOX: Not sure how well that would translate to the digital world.

ADAM GLICK: [LAUGHS] Maybe not.

CRAIG BOX: Thank you to all of you who took the time to fill out our listener survey. We've started to look through your feedback and comments. And we very much look forward to using the information to make the podcast even better and more accessible to the entire cloud native community.

ADAM GLICK: Let's get to the news.

[MUSIC PLAYING]

ADAM GLICK: Linkerd 2.9 has been released. The new release brings a host of new features, including mTLS for all TCP connections, ARM processor support, and support for Kubernetes EndpointSlices. Additionally, the proxy runtime can now use multiple cores, and a Spanish-language dashboard is now available. You can find out more about this release in our interview with Thomas Rampelberg later in this episode.

CRAIG BOX: Last week, we brought you the news that the container registry vendors had all published guidance on how to configure their software to work around Docker's new limits. AWS's announcement came in after our recording, but with the promise of a new public container registry service to be launched, quote, "within weeks." The new service promises anonymous access with higher limits than Docker Hub and higher limits still for those logged into an AWS account. It will also launch with a web directory.

ADAM GLICK: IBM has announced the addition of a code risk analyzer to their CI service, IBM Cloud Continuous Delivery. The new closed source tool checks for vulnerability, license management, and CIS issues in deployment configurations, as well as linting for security issues. Terraform files are also scanned to identify any security misconfigurations. All of this is built within a role-based Open Policy Agent framework for controlling such policies, which is designed to help developers be developers and security teams be security teams.

CRAIG BOX: Way back in episode 11, we spoke to Helm chart maintainer Vic Iglesias. The circle of life brings us to the deprecation of the repositories he talked about, and he has written a blog post explaining what's happening. If you're using a chart from the old repository, you should be aware that they will be archived as of November the 13th, with no further changes accepted. Existing charts have had their README files updated to say that they are no longer being maintained and to point to the new hosting location for the chart.

If there is not a new owner, the team asks that you consider becoming a maintainer if it's a chart that you use. You should still be able to use the archived version of the charts. But be aware that they are no longer maintained at that location and will not be patched for bugs or security issues. If you want to host your own Helm 2 chart repository, the team has released tools to help. All in all, if you haven't migrated to Helm 3, it's probably time you start to seriously look into this or consider using a different configuration utility.

ADAM GLICK: Finally, Kubernetes networking is getting put to the security test. Researchers at CyberArk have started an online series of in-depth analyses of attacks on Kubernetes networking. In the first installment of their research, there are two targets.

The first is the container networking interface, using attack techniques such as ARP poisoning and DNS spoofing. The attack allows a pod to intercept the DNS queries from other pods on the same node. The attacking pod could provide alternate DNS responses, causing the attacked pod to send network traffic to an IP of the attacker's choosing.

Two mitigations for this attack exist: using iptables to block DNS responses from another pod, or removing the NET_RAW capability from the application, which keeps the attacking pod from being able to access the raw sockets required for this attack.

The second attack allows the bypassing of network and firewall policies by sending UDP packets to port 8472 on a node from a pod. An iptables rule blocking this access can mitigate the risk until a fix is implemented.
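
To make the capability-based mitigation concrete, here is a minimal sketch, using only the standard Kubernetes Go client libraries, that renders a pod manifest with the NET_RAW capability dropped. The pod and image names are placeholders, not anything from the CyberArk research.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "hardened-app"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "example.com/app:latest",
				SecurityContext: &corev1.SecurityContext{
					Capabilities: &corev1.Capabilities{
						// Drop CAP_NET_RAW so the container cannot open
						// the raw sockets needed to forge ARP replies or
						// spoofed DNS responses.
						Drop: []corev1.Capability{"NET_RAW"},
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}
```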

CyberArk has also hinted at their next update, which includes an attack that targets the routing features of network plugins; specifically, a new security issue in the Border Gateway Protocol, or BGP, with an example exploit using Calico.

CRAIG BOX: And that's the news.

[MUSIC PLAYING]

ADAM GLICK: Thomas Rampelberg is a software engineer with Buoyant, as well as a core maintainer of the Linkerd project. He is also a co-creator of DC/OS and the co-author of the Service Mesh Interface. Welcome to the show, Thomas.

THOMAS RAMPELBERG: Hey, thank you. Pleasure to be here.

CRAIG BOX: In a piece of completely coincidental timing, last week, D2iQ, the company formerly known as Mesosphere, announced that they are retiring the DC/OS software and moving fully towards Kubernetes. You were one of the creators of DC/OS, so I thought this might be an interesting opportunity to do a little post-mortem on the project. Can you start by telling us some of the stories from the earliest days?

THOMAS RAMPELBERG: My favorite story for the earliest of early days, DC/OS was a science project that I cooked up while I was at Mesosphere. In fact, we basically put it together to go hit a marketing date. The demo that we did originally was all totally real, except for one little facet, which was that I had created a fake framework. For those who don't know, DC/OS is built on top of Mesos. Mesos is a two-level scheduler, which is a little bit different than Kubernetes.

The high level is that you need to have a scheduler written for every workload that you want to do. So if you want to have a long-running workload, we had a framework called Marathon. Or if you wanted HDFS, you'd need to write a scheduler for that. There was no way I was going to write schedulers for all of the workloads we wanted to show off in the demo. And so I wrote one that faked it entirely. You just give it a name and a workload pattern, it would go and run on a cluster.

One of the first ones that I faked was actually named Kubernetes to show Kubernetes working on top of Mesos as part of the original DC/OS demo. There's a fun little history story there because that was back when I think Kubernetes had actually launched 0.1 way, way back in the day. And we were still trying to figure out how all of us worked together.

CRAIG BOX: I do remember Kubernetes showing up as something that was supported by Mesos and at Google thinking, well, that's very exciting. And then it very quickly disappeared from the marketing materials, never to be seen again.

THOMAS RAMPELBERG: Yeah, we had lots and lots of discussions early on about that. In my opinion, the Mesos scheduling model just doesn't quite work underneath Kubernetes. It's got enough of a different scheduling model that it's a little bit of a tough fit.

After I left Mesosphere, a couple teams there did some really crazy awesome stuff around getting Kubernetes to actually run on top of Mesos by basically virtualizing kubelets inside of containers, which is kind of the direction the K3s stuff from Rancher has been going as well, which is pretty cool.

CRAIG BOX: Do you have any feelings on product market fit over time? Do you think Kubernetes was the Linux to Mesos's Windows or anything like that?

THOMAS RAMPELBERG: We could go endlessly on my opinions of why Kubernetes won. I'll bring up two things, in my opinion. The easiest one to point out is that Mesos made it really hard to do what you wanted to. Pretty much everything people want from a container orchestrator is the same as what you want from a cloud provider. You want something that gives you CPU and memory and runs a task. That's all you want.

CRAIG BOX: Right.

THOMAS RAMPELBERG: Mesos's two-level scheduler, while really cool, means that you have to think through resource offers, how they fit together, and separate workloads. And you start having a bunch of complexity on top of it. Kubernetes really focused very narrowly on that: I want CPU and memory, and I want it to go run somewhere, and I don't want to think about anything else. And it nailed that.

The other thing where I think Kubernetes really knocked it out of the park-- and I have just a ton of respect for Brendan putting this together-- is the fact that the Kubernetes API is purely async. With Mesos, we really spent a bunch of time trying to make a synchronous API to go over top of a distributed system.

While that is potentially easier to reason about as a user, it is basically a disservice to the user, because it hides all of the important distributed systems concerns. You can't think about the difference between a spec and a state, like you can with Kubernetes. And I think that those two things, in combination with some of the awesome community and Google work that's been happening with Kubernetes, really just made it so that Mesos didn't have a chance.
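
To make the spec-versus-state idea concrete, here is a toy sketch in Go (not Kubernetes code, just the shape of the pattern): the API call only records desired state, and a reconciler converges observed state toward it asynchronously.

```go
package main

import (
	"fmt"
	"time"
)

type Spec struct{ Replicas int } // desired state: what the user asked for
type Status struct{ Ready int }  // observed state: what the system achieved

type Object struct {
	Spec   Spec
	Status Status
}

// reconcile nudges observed state toward desired state one step at a
// time, the way a controller does on each sync loop.
func reconcile(o *Object) {
	if o.Status.Ready < o.Spec.Replicas {
		o.Status.Ready++ // e.g. start one more replica
	}
}

func main() {
	// "Submitting YAML" amounts to setting spec; the call returns
	// immediately, before the system has converged.
	o := &Object{Spec: Spec{Replicas: 3}}
	for o.Status.Ready != o.Spec.Replicas {
		reconcile(o)
		fmt.Printf("spec=%d status=%d\n", o.Spec.Replicas, o.Status.Ready)
		time.Sleep(10 * time.Millisecond)
	}
}
```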

CRAIG BOX: Are there any lessons from Mesos that you think Kubernetes has not yet adopted and should?

THOMAS RAMPELBERG: Not yet is a strong one. The DC/OS installer is probably still, to this day, better than anything else in the Kubernetes ecosystem. And I think that that's heavily because most of the investment is on the cloud provider side, where you've got hosted solutions. Like, GKE is so much nicer than installing it on my VM somewhere. There's no reason to invest in it.

The DC/OS installer itself was still probably the best installer there is for something like a container orchestrator. We spent a bunch of time and effort making that really great. At this point, though, I'm not sure. The Kubernetes ecosystem is so big and has so much great stuff in it that it has far surpassed DC/OS, in my opinion.

Potentially, the thing you could call out is the scale that Mesos worked at. We had it up over 60,000 nodes at one point in time, which is super cool. I think anyone running a 60,000-node cluster is completely insane.

CRAIG BOX: Especially on Kubernetes.

THOMAS RAMPELBERG: Oh, especially on Kubernetes. Your fault zone's one 60,000-node fault zone, that's insane-o scale. If that goes down, your business goes down. I can't-- no. [LAUGHS]

CRAIG BOX: Even just multiplying the cost of running a VM by 60,000 and just thinking how much that cluster will cost to run, it's mind-blowing.

ADAM GLICK: It does seem to go against the concept of a distributed system, to have one monolithic system that's running the distributed system underneath it.

THOMAS RAMPELBERG: There is some humor there. When I did the first DC/OS demo, I was really enjoying getting 1,000-, 2,000-, 3,000-node clusters on GCP. Being able to get a 3,000-node cluster in 10 minutes is just about the coolest thing ever. I don't know if you can still do that, but man, it was cool at the time.

ADAM GLICK: You moved on from the DC/OS work, and you're currently working on Linkerd, which is a service mesh. It's part of the CNCF, a little different than the Mesos work that was part of the Apache Foundation. And it's pretty new technology that not everyone is familiar with. Can we just start with an explainer of what is a service mesh and why someone would use it?

THOMAS RAMPELBERG: In the beginning, there were monoliths. And we all wrote our application as one big process. And it was good, and we loved it.

ADAM GLICK: I feel like the 2001 theme should be playing, the (SINGING) dun da, with the monolith bit.

[LAUGHTER]

CRAIG BOX: And then we wanted to deploy it to 60,000 nodes.

THOMAS RAMPELBERG: Not even 60,000 nodes. This is actually where I'm going. Then we introduced teams, and multiple teams worked on a monolith. And as soon as you had two or three teams working on a monolith, everybody stepped on each other's toes. Then microservices were born, which is an interesting point that I have. In my opinion, microservices solve a people problem and not a technology problem.

CRAIG BOX: Agreed.

THOMAS RAMPELBERG: It makes it so that teams can own their own destinies and work together. It does not solve a technology problem, in my very personal opinion.

CRAIG BOX: It solves a catering problem.

THOMAS RAMPELBERG: Interesting. Tell me more.

CRAIG BOX: Two pizzas.

THOMAS RAMPELBERG: Ah.

ADAM GLICK: I knew you were going there.

[LAUGHTER]

THOMAS RAMPELBERG: That's a great one. Once the microservices started to get split out, one of the things that we ended up noticing was that there was a bunch of common functionality that everyone needed to implement. You want to have observability that is standard across your whole infrastructure, because your infrastructure isn't one level, it's multi-level.

And so you want to have multi-level observability. When you want to go put security in, you don't want random teams implementing security differently in their own stack. If you're implementing retries and timeouts, you want to provide that as a common set of functionality.

Linkerd, the first version of it, is actually built on top of a library called Finagle that came out of Twitter. And Finagle was how Twitter solved this problem. And because they were able to mandate a single language for all of their microservices, you would use Finagle as the HTTP client and server and then get all of this functionality out of the box.

That doesn't work in different organizations because microservices end up getting written in whatever language a team wants. Remember, again, in my opinion, the whole point of microservices is to give teams control of their own destiny. They should be able to pick whatever language they want to write in.

The service mesh came into being as part of Linkerd 1, which was a proxy that you'd run on every node. It would intercept all of the incoming and outgoing communication and provide the common observability layer, the common service discovery, and common retries, timeouts, and security.
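
As a rough illustration of that interception idea, here is a toy sketch in Go, nothing like Linkerd's actual Rust proxy, of a reverse proxy layering uniform observability in front of an application. The listen and upstream addresses are arbitrary placeholders.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// Placeholder upstream: the application this proxy fronts.
	backend, err := url.Parse("http://127.0.0.1:9090")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		proxy.ServeHTTP(w, r) // forward the request unchanged
		// Uniform observability: every service behind a proxy like this
		// gets the same telemetry, with no application changes.
		log.Printf("%s %s took %s", r.Method, r.URL.Path, time.Since(start))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```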

ADAM GLICK: Why was that something that should be built outside of, say, Kubernetes' own networking tools and capabilities, versus something that was built as part of it?

THOMAS RAMPELBERG: I think that it is a story more about how you go and compose tools more than anything else. If we focus on Kubernetes really providing that "I want to start and run a container somewhere" piece, you start to see the larger ecosystem happen. A great example is CNI plug-ins providing the network for Kubernetes. It's not necessarily part of Kubernetes core, though there are some pieces there. It's a fuzzy question. kube-proxy is one of the best things that Kubernetes ever did.

CRAIG BOX: It's not a proxy anymore.

THOMAS RAMPELBERG: Right.

CRAIG BOX: It's badly named.

THOMAS RAMPELBERG: Did it ever start out being a proxy, or was it always iptables-based?

CRAIG BOX: No, it very much did. It was very, very slow, and it was replaced very, very quickly.

THOMAS RAMPELBERG: I don't think folks understand how powerful kube-proxy is. The fact that you have reliable service discovery and you don't need to worry about DNS failures in Kubernetes is something that we didn't do in DC/OS that we should have done very early on. It's a piece that folks don't think about, and the cloud providers don't either. Kubernetes solved service discovery for folks. It's awesome, and kube-proxy is basically the reason for that.

CRAIG BOX: So that being said, you've got Kubernetes as a system that basically deploys containers and handles networking. You've got Mesos and Marathon as the system that just deploys containers. So you can argue that you could have composability and add services on, or you could argue that the whole package should do everything. How do you help make a decision like that?

THOMAS RAMPELBERG: It's such a hard question. In my opinion, it comes down to distributions. I really don't like to see single projects doing too much. A great example is with Linkerd 2, we have not touched Ingress. Ingress is super important for Kubernetes. It's critical. You need it. But we haven't touched it, and we haven't touched it specifically because other folks are much better at Ingress than we will ever be.

We're really much more about that east-west load balancing instead of north-south load balancing. I really love being able to compose with other projects. That said, folks really want a bundle that works out of the box. And so that's where you start to talk about distributions and how the pieces fit together and wiring everything up so that it works really well together.

And I think that that's the story. Even GKE, they've got a checkbox for Calico these days, which isn't a Kubernetes project, but Calico works great out of the box on GKE. It's awesome.

CRAIG BOX: You've mentioned there Linkerd 1 and Linkerd 2. Linkerd 1 was the single proxy per node based on Finagle, as you mentioned. Linkerd 2 was a rewrite that was designed to enhance Kubernetes and be Kubernetes specific. Buoyant had built an open source project called Conduit, which was an experimental Kubernetes service mesh. And then, eventually, that became Linkerd 2. Can you tell us a little bit about the process of building Conduit and why you moved it from being its own thing to an evolution of Linkerd?

THOMAS RAMPELBERG: Probably the easiest thing to say is that Finagle is Scala, which means Linkerd 1 was built on top of the JVM. And while the JVM is awesome for a lot of things, being memory- and CPU-lightweight is not one of them. You end up having to deal with garbage collection and things like that.

We took a look at where Linkerd 1 was and what Kubernetes provided and saw that we could produce something that was quite a bit more lightweight and fast if we integrated really closely with Kubernetes. Because Kubernetes gives you a lot of stuff out of the box, which means you can cut a ton of really interesting corners, like service discovery.

That was something we needed to implement ourselves in Linkerd 1, either by integrating with Marathon or Consul or Kubernetes or something like that. With Linkerd 2, because we have integrated so tightly with Kubernetes, we are able to use a lot of the native Kubernetes resources and have a much lighter cognitive model, making it easier to adopt and to understand when things go wrong.

And so it was kind of that combination of moving over to a sidecar model, which means that you need much more lightweight proxies so that you don't have to pay a crazy amount of dollars, and taking all of the UX lessons that we had learned from Linkerd 1 and moving them into Linkerd 2 to make it easier to support and easier to use.

CRAIG BOX: For that fast proxy in Linkerd 2, you implemented a proxy in the Rust programming language. That might be less familiar to some of our listeners. Can you talk a little bit about Rust and why you made that choice?

THOMAS RAMPELBERG: Rust is a systems programming language that is asynchronous and gives you a ton of control over memory. So you can do quite a bit of memory management. Early on, we worked with a project called Tokio, and with Hyper, which is top five on the HTTP benchmarks right now. Because of the compiler and how everything is set up in Rust, you can have quite a few guarantees around making sure that your memory is managed and you don't have any security issues.

Super small, fast, lightweight, plus the ability to go and not worry about memory issues is massive. When we were looking to do the rewrite, that was when Heartbleed was happening. And so we were really sensitive about moving into a language like C++, where we'd have to be very sensitive about buffer overflows and that kind of thing. Rust does a pretty good job of making sure that we don't need to worry about those quite as much.

In fact, the CNCF funded a security audit for the proxy, let's call it within the last 12 months. And it came back with two thumbs up, flying colors, which was really awesome to hear.

ADAM GLICK: You now have a Kubernetes-native version of Linkerd in Linkerd 2. How do you factor in non-Kubernetes-based workloads, like VMs?

THOMAS RAMPELBERG: We don't right now. I continue to ask folks what they need. On some level, everyone is brownfield, right? You're not creating an application on Kubernetes out of whole cloth. Most organizations are migrating from a huge VM deployment over to Kubernetes and that kind of thing. So far, we haven't found anybody who can't be served by just putting something together more easily themselves.

The project that I'm most excited about on this at the moment, though, is actually K3s from Rancher, because it makes it so that you can actually run little Kubernetes deployments on top of VMs. And so as long as your application is containerized, you can shove it into a K3s cluster on your VM and then use the multi-cluster support that we have in Linkerd to wire it all up. And suddenly you have VM support, which is pretty cool.

CRAIG BOX: Now, you mentioned multi-cluster support in Linkerd there. That's something that you've been involved in; you've published a number of blog posts and done a number of conference talks about it. There are a number of approaches that people have had in the past to the idea of running a workload across multiple clusters. There's the idea of federating clusters together. There's the idea of building a mesh that runs across multiple clusters. And there are the approaches that you've settled on in Linkerd. Can you talk through the decision process as to how you got from A to B to C in building that out?

THOMAS RAMPELBERG: In my blog post, the first requirement that I talk about is the hierarchical networks. The second one is maintaining independent state. If we talk about the CAP theorem, the partition there is the most important one. Network partitions happen. They happen regularly. And especially if you're talking about doing multi-region and multi-cluster or even an edge deployment, you have to plan for partitions.

The state needs to be local, or else you start to get failures. If you have delegated to a single master cluster and you get a partition, I would hate for my workloads to stop. We have a user of Linkerd who uses Linkerd at the edge and needs the ability, if a backhoe goes and tears through one of their fiber connections, to go and do everything local so that the system stays up.

The third requirement I had is an independent control plane. And that kind of follows from the independent state. But it's kind of the next level. You want to be able to make control plane decisions locally because they're going to be locally optimal on that cluster and because you want to make sure that it works during failures.

The whole point of how we put together the Linkerd multi-cluster was to make it a primitive that you can go and build on top of to put interesting tools together. Just because we haven't made federation an obvious thing doesn't mean you couldn't do it. There's no reason why you couldn't automate that federation side of things.

We also have a unique advantage in that because the Linkerd control plane is so lightweight and easy to administer, it's simple to just throw it everywhere and then figure out how you want to wire it together for your specific use case.

ADAM GLICK: We talked about the transition from Linkerd to Linkerd version 2. Are people still running version 1?

THOMAS RAMPELBERG: There are people still running version 1, in particular, folks who are not on Kubernetes yet. Quite a few happy HashiCorp Nomad users, quite a few happy Mesos users for a very short period of time here. The world has not moved over entirely to Kubernetes. It doesn't work for every workload. I think I actually read a blog post recently by Cloudflare saying that they love Nomad internally and use it pretty extensively.

ADAM GLICK: Are there use cases such as Nomad for people to continue to use Linkerd version 1?

THOMAS RAMPELBERG: I think so. We're not putting a ton of effort into it. And so I'm not going to say end of life, but it's definitely bug fixes only at this point in time. I would always strongly recommend folks just to get on to Kubernetes, because that's where the ecosystem is going. But if that's not an option, definitely, Linkerd 1 is a solid solution. It's running at insane-o scale.

CRAIG BOX: Just like Mesos.

THOMAS RAMPELBERG: Just like Mesos, exactly.

CRAIG BOX: Now in the 2.x series, Linkerd 2.9 has just come out. What's new in that release?

THOMAS RAMPELBERG: The biggest theme of 2.9 is mTLS for TCP. We came out with mTLS automatically, out of the box, with no configuration almost a year and a half ago at this point. But it only ever worked for HTTP. And the reason for that is because of how we do service discovery, which was traditionally pulling the host header for the service that you were looking for out of the HTTP communication. As part of Linkerd 2.9, we're now doing it off of the IP addresses themselves, which gives us a lot more flexibility on the types of things that we can do mTLS on top of.

So that's the headliner, but I do want to call out a couple other super cool things that we landed. One of them is service topology support. Mate, one of our Community Bridge students over the summer, put that together. For those that don't know, service topology has landed, I think, in Kubernetes 1.17-- don't quote me on it.

But that makes it so that you can control routing decisions on a node level, which means that-- let's say you've got a single cluster that is cross-region or cross availability zone. You don't want to make calls out to the other region unless you absolutely have to. Service topology makes it so that you could route a connection between two microservices on the same node, or you could do it inside a region to really improve your performance. That's built on top of endpoint slices, which is another awesome recent Kubernetes feature.
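
For a concrete picture, here is a minimal sketch of a Service using the alpha topologyKeys field from that era (it has since been replaced by topology-aware routing); it assumes a 1.17-vintage client library, and the service name and port are placeholders. The keys form a preference list: same node first, then same zone, then anywhere.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	svc := corev1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: "my-backend"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "my-backend"},
			Ports: []corev1.ServicePort{{
				Port:       8080,
				TargetPort: intstr.FromInt(8080),
			}},
			// Route to an endpoint on the same node if one exists, then
			// the same zone, then fall back to any endpoint at all.
			TopologyKeys: []string{
				"kubernetes.io/hostname",
				"topology.kubernetes.io/zone",
				"*",
			},
		},
	}
	out, _ := yaml.Marshal(svc)
	fmt.Println(string(out))
}
```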

From a performance angle, probably one of the coolest things in Linkerd 2.9 is that traditionally, our proxy has been single-threaded. And we have done that because it made it easier to make sure that the system is correct; once you add multiple threads in, that's very difficult. We spent a bunch of time working with the Tokio team to get the Tokio runtime working really well multi-threaded. And now that we've rolled that out, you can consume as many CPUs as you want in the Linkerd proxy itself, which means that you can push pretty crazy RPS at this point in time.

CRAIG BOX: In podcast parlance, that's like having a gas station with multiple bathrooms.

ADAM GLICK: [LAUGHS] Nice callback to our chat with Mike Denise. You're also the creator of the Service Mesh Interface. That was announced last year. Can you explain the purpose of what the SMI was built to do?

THOMAS RAMPELBERG: SMI is really similar to CNI in how you want to think about it in that CNI was made so that vendors could build out their own networking implementations. And then Kubernetes could interact with it. On the other side of things, folks could build tools against the CNI interface and have an ecosystem around that, so that when a new vendor comes out, you don't need to integrate directly with their CNI implementation.

I really view SMI as that same kind of thing. As a vendor, it gives you a common set of APIs to go integrate with. You don't need to think about it. In fact, Nginx just released a service mesh of their own, built on top of the Nginx proxy. They implement, I believe, almost all of the Service Mesh Interface. So they didn't need to think about how the API was built. They could just go integrate their awesome solution on top of it.

And on the other side of things, there are some really, really cool tools in the service mesh space. Flagger is one that I'd love to call out. That's done by Stefan over at Weaveworks, and it gives you canary rollouts on Kubernetes. Super fantastic project. I can't say enough good things about it. Service Mesh Interface makes it so that he potentially has only one integration that he needs to write to go and give all of the service meshes that functionality.

The one that hasn't happened yet, but I am super excited, is Kiali, which is another absolutely fantastic service mesh community project done by, I believe, Red Hat. They do some really great visualization of what's going on in your cluster. And I am dying to have Kiali integration in SMI so that we can get that with Linkerd, again, so that the Kiali folks don't need to go integrate with 30 different service meshes.

CRAIG BOX: We spoke to Antonin Bas last week, who builds a CNI plugin. And he explained to us that the CNI has three APIs, attach a network, detach one-- start and stop-- and query whether it's running. Is it fair to assume SMI has more APIs than that?

THOMAS RAMPELBERG: I think we've got four, actually. We don't have very many APIs in SMI.

CRAIG BOX: Not that many more.

THOMAS RAMPELBERG: Part of it is spending a bunch of time trying to figure out what the 90% use case is. Service meshes are such a Swiss army knife because they sit in the data plane. You can do almost anything. Go take a look at some of the really cool stuff that Istio does, especially with all of the new Wasm support. It's crazy the cool things that you can do there.

I was chatting with someone who's actually been using Istio on mainframes to go and get their, we're going to call them ultra legacy workloads, pulled into the Kubernetes world. It's such a cool use case. But I don't think that very many folks have a mainframe sitting in their data center that they need to hook up to Kubernetes. With Service Mesh Interface, we really try to focus on that 90% use case. And we really defined three high-level pieces there-- observability being one, canary rollouts, or what we call traffic splits, being number two, and policy being number three. So being able to say that Service A can talk to Service B, yes or no.

Under that, we've got a fourth API that's really just a CRD that allows you to define traffic. You want to write policy not just on whether Service A can talk to Service B; you also want to define the protocol that they're chatting and what type of traffic is valid there. You would say that this service can talk to a specific endpoint on an HTTP service. And so those are the four main use cases that SMI covers today.
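
As an illustration of the canary-rollout piece, here is a sketch using local stand-in structs that mirror the shape of an SMI TrafficSplit; these are not the official Go types, and the service names and weights are invented for the example.

```go
package main

import "fmt"

// Stand-in structs sketching the shape of the SMI TrafficSplit schema.
type Backend struct {
	Service string // a destination Kubernetes service
	Weight  int    // share of traffic it should receive
}

type TrafficSplit struct {
	Service  string    // the root, user-facing service
	Backends []Backend // weighted destinations
}

func main() {
	// Shift 10% of "checkout" traffic to a canary, SMI-style.
	ts := TrafficSplit{
		Service: "checkout",
		Backends: []Backend{
			{Service: "checkout-stable", Weight: 90},
			{Service: "checkout-canary", Weight: 10},
		},
	}
	for _, b := range ts.Backends {
		fmt.Printf("%s -> %s: %d%%\n", ts.Service, b.Service, b.Weight)
	}
}
```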

CRAIG BOX: A lot of service meshes have a laundry list of features that they can implement. How do you implement the last 10%, which is different between all the different interface implementations?

THOMAS RAMPELBERG: I think that you should just have your own CRD definitions or your own configuration to figure it out. It's one of those where no single shoe fits. That last 10% is definitely the long tail. On a service mesh by service mesh basis, I think you should pick what works best for your service mesh. Some folks are going to want to use CRDs. Consul is a really great example where that doesn't really work, because they've got the historical Consul stuff. So they've got their own kind of configuration.

And I'm not sure it makes sense for them to move that all into Kubernetes someday because they are so spread out across VMs and Kubernetes and Nomad and the rest of that. So that last 10%, I think, really comes down to what differentiator you want your service mesh to be. And I think that's actually powerful to have the flexibility to do that.

One of the things that I did in SMI was really spend a bunch of time on object references specifically to allow for service meshes to go create their own objects that they can reference into the SMI definitions so that it looks and feels like SMI, but you have a lot more room to define what you need to go do yourself.

ADAM GLICK: You've worked in what we now call the application modernization space for quite a while, always working on projects before they became mainstream: container orchestration with DC/OS, early in that space, and Linkerd, early in the service mesh space. You seem to have a bit of a crystal ball as to where this world is going. So what comes next?

THOMAS RAMPELBERG: Oh--

ADAM GLICK: And if you have any stock picks, just slide them right in there.

THOMAS RAMPELBERG: Do not let me pick stocks.

ADAM GLICK: [LAUGHS] Let's stick to technology.

THOMAS RAMPELBERG: Yeah, definitely. Something that's been a big argument in the Kubernetes space for quite a while now is that folks are YAML programmers. And everyone hates on the YAML, which makes me sad inside-- and I have a long rant on this that I actually need to write down-- because Kubernetes is really a domain-specific database. And you need to look at it that way.

The YAML is literally writing a select statement or an insert statement for a database. That's what the YAML is. And it's awesome that it's structured the way it is. And it's awesome that it's got a schema. But the YAML is you writing an insert statement into Kubernetes.

CRAIG BOX: Doesn't that make it more literature than programming?

THOMAS RAMPELBERG: Valid point, valid point. The implication there is that if you go look at what's happened in other programming worlds over the last couple decades, Ruby on Rails is a great example of this. Ruby on Rails said it's ridiculous that you have to write an insert statement for everything. Here is an ORM that lets you write in a programming language and have it go into the database.
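
To sketch that "ORM over Kubernetes" direction, here is a minimal example using client-go to insert a record programmatically instead of writing the YAML by hand. The namespace and object names are placeholders; it assumes a standard kubeconfig.

```go
package main

import (
	"context"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "greeting"},
		Data:       map[string]string{"hello": "world"},
	}
	// Morally the same as: INSERT INTO configmaps VALUES (...).
	if _, err := client.CoreV1().ConfigMaps("default").
		Create(context.TODO(), cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```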

The thing that I am super excited about watching and seeing as part of the Kubernetes ecosystem is people building abstractions that are specific to developers and operators on top of the Kubernetes database. One of the ones that I am personally most passionate about-- and I think we're going to get to it-- is developer workflow tooling.

Kubernetes itself has been super focused on infrastructure and running the containers because Docker came along and knocked the developer experience out of the park. If we're honest, developing on top of Kubernetes proper with no tooling is very painful. What I'm really excited about is a lot of the startups that have been working now-- like Tilt, for example, and Okteto-- on building out the developer ecosystem and making it so that developers can work super fast on top of Kubernetes, specifically because we've now got this interface that's common everywhere.

So you can build a tool that works for a bunch of different companies. And developers can start to go and build on top of remote clusters. So the size of your laptop isn't limiting anymore. I have a bunch of friends working at larger companies where they have so many microservices that you just can't develop on a laptop anymore. They give them a server. Here's a server that's massive. Good luck doing your development.

It's crazy to me that that's the thing when we can just spin up a GKE cluster and go do all of our development remotely. And so, I am very, very, very excited about that swing back that I hope to see soon into the developer workflow space and making that awesome on top of Kubernetes.

CRAIG BOX: With the service mesh, we get observability of the workloads we're running. With Kubernetes, we get CPU and memory provided for them. Should we be deploying on Fridays?

THOMAS RAMPELBERG: [LAUGHS] No, we should not be deploying on Fridays.

CRAIG BOX: Why not?

THOMAS RAMPELBERG: Deploying on Fridays is a human problem. Let me tell you a story about my first startup. This was years and years ago, and we were running anti-spam and antivirus as a service. We had just gone through a pretty major rewrite. We rewrote the entire application, basically, and moved all of the hardware out of where we had it into a new colo. I think we finished up deploying this and doing tests on it at 10:00 PM. It actually wasn't a Friday. I think it was that Sunday.

I had Burning Man the next day. And so I jumped in the car and drove out to Burning Man. To the eternal credit of my co-founder, while I was off the grid there for a day before I checked back in, he had to go restart the whole stack every 45 minutes because we had a very unique timeout issue that would cascade through the system.

ADAM GLICK: Oh, my.

THOMAS RAMPELBERG: Once I gave him a phone call a day or two later, he's like, dude, you have to fix this. And so I spent a day or two at Burning Man in a very friendly person's plywood box, with a cantenna Wi-Fi pointed at Gerlach, which is quite a bit away from where Burning Man is, on 30% packet loss, trying to debug my timeout issue. We finally solved it, and everything was stable. But if I had to do it again, I would not go to Burning Man the next day.

CRAIG BOX: Well, possibly you'd sign up for Elon's satellite internet beforehand.

THOMAS RAMPELBERG: I'm sure that things have changed. I've heard they have cell phone service out there now. And so, it would be a little bit better. But man, don't release on Friday.

CRAIG BOX: Doesn't that kind of defy the point of Burning Man?

THOMAS RAMPELBERG: A little bit.

ADAM GLICK: All's well that ends well. And if I recall correctly, things did work out well for that startup, correct?

THOMAS RAMPELBERG: Yes, definitely.

ADAM GLICK: Good to see that you can make it through, even if you've learned that deploying right before going on vacation (holiday, Burning Man, whatever) is probably not the best deployment time. Amazing story. Good advice for everyone. And thank you so much for joining us today, Thomas.

THOMAS RAMPELBERG: Thank you.

CRAIG BOX: You can find Thomas Rampelberg on Twitter at @grampelberg.

ADAM GLICK: How does G stand for Thomas?

THOMAS RAMPELBERG: It doesn't. Trampelberg sounds a little bit like trample. I have a bit of a reputation for being grumpy. And so Grampelberg sounds like a very grumpy individual, and I was into it.

CRAIG BOX: The G is for grumpy. You heard it here first.

[MUSIC PLAYING]

ADAM GLICK: Thanks for listening. As always, if you've enjoyed this show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on Twitter at @KubernetesPod, or reach us by email at kubernetespodcast@google.com.

CRAIG BOX: You can also check out our website at kubernetespodcast.com, where you will find transcripts and show notes, as well as links to subscribe. Next week is KubeCon week, so we'll, quote, unquote, "virtually see you there."

ADAM GLICK: Catch you next week.

[MUSIC PLAYING]