#203 June 19, 2023
This week we speak to Justin Cormack, the CTO of Docker. We talked about Wasm (WebAssembly) modules, Docker support for running Wasm apps, and the future of the technology.
KASLIN FIELDS: Hello, and welcome to "The Kubernetes Podcast" from Google. I'm your host, Kaslin Fields. And--
ABDEL SGHIOUAR: I am Abdel Sghiouar.
This week, we spoke to Justin Cormack, the CTO of Docker. We talked about Wasm, or WebAssembly modules, Docker support for running Wasm apps, and the future of the technology.
KASLIN FIELDS: But first, let's get to the news.
ABDEL SGHIOUAR: The German version of the Cloud Native Glossary is live. The Cloud Native Glossary is a project led by the CNCF as an effort to make cloud native terms easy to understand without requiring prior technical knowledge. The community is prioritizing localization in multiple languages. The glossary is now available in 10 languages besides English.
KASLIN FIELDS: The Kyverno project released version 1.10. This is a major release for the Kubernetes native policy engine. New features focus on scalability, extensibility, and security.
ABDEL SGHIOUAR: Chainguard moved from the GitHub container registry to a self-managed one for database images. The new registry is designed with security out of the box, provides cloud events for pulls and pushes, and is built on top of Cloudflare R2, which has a $0 charge for egress.
KASLIN FIELDS: Microsoft announced the general availability of its Azure Linux. The Linux distro is an open-source container host for the Azure Kubernetes service. The distro is based on CBL, or Common Base Linux, and is designed to have a small footprint. The core image is 400 megabytes in size and has 300 packages pre-installed.
ABDEL SGHIOUAR: Google's new C3 machine family is available for GKE Standard clusters running version 1.22 or later. These new machine types are powered by the fourth-generation Intel Xeon Scalable processor, codenamed Sapphire Rapids, DDR5 memory, and Google's custom Intel infrastructure processing unit.
The VMs are designed to support network configurations with up to 200 gigabits per second of bandwidth, support Intel AMX for ML workloads, and a whole set of features for security and performance. Check out the show notes for more details.
KASLIN FIELDS: Amazon announced support for pull-through cache from Kubernetes' official registry to the Amazon Elastic Container Registry. Customers can configure ECR to pull images from the upstream registry.k8s.io and cache them, removing the dependency on external registries for commonly used images.
ABDEL SGHIOUAR: The Linux Foundation announced WasmCon, a conference focused on WebAssembly technologies. The event will take place September 6 and 7 in Bellevue, Washington. The call for proposals is open until June 25. Links for the conference and the CFP are in the show notes.
KASLIN FIELDS: Cilium announced the general availability of their Intro to Cilium course on the Linux Foundation training and certification website. Cilium is an open source cloud native solution for networking, observability, and security. The course introduces the incubating project and demonstrates its features. Check out the link in the show notes for more information.
And that's the news.
ABDEL SGHIOUAR: Welcome to a new episode of "The Kubernetes Podcast" with Google. I am your host, Abdel. And today we have the pleasure to talk to Justin.
JUSTIN CORMACK: Hello. Nice to be here, in person at KubeCon. It's a lovely time.
ABDEL SGHIOUAR: Nice. Yeah. So we are here in Amsterdam, in our office, on a Monday, recording this, and we are just in, I would say, the pre-KubeCon chaos.
JUSTIN CORMACK: Yes. Just haven't collected my pass yet. I haven't seen the chaos yet. But I'll probably do that after this.
ABDEL SGHIOUAR: Yeah. I haven't collected my pass yet. And you know, usually KubeCon is just very, very chaotic. Scott tweeted about the topic we're going to talk about today. And then I tweeted at him and said, hey, we'd like to have somebody on the show to talk about it.
We have a rule in the show that we don't want to have the same guest twice. So that's why I said to Scott, we want to have somebody else, because Scott was already on the show once. And then he volunteered you, I guess.
JUSTIN CORMACK: Well, it's nice to be here. I can't believe I haven't been on here before, you know. It's an opportunity.
ABDEL SGHIOUAR: We've been chatting earlier. You've been at the company for, like, seven years.
JUSTIN CORMACK: Yeah.
ABDEL SGHIOUAR: Well, here we are. So let's start with who you are, and what do you do?
JUSTIN CORMACK: Yeah. So I'm Justin Cormack. I'm the CTO at Docker. As I've said, I've been at Docker for 7 and 1/2 years now. So I've had a bunch of different roles, been CTO for a few years now. And yeah, Docker has been through a lot of change.
But the last few years, we've really been focusing on the developer story, what's good for developers, how to help developers with better products for them. And that's really my focus.
ABDEL SGHIOUAR: Nice. Nice. You mentioned earlier that you're also responsible for everything that Docker provides as a service-- so essentially all the hosted products, right?
JUSTIN CORMACK: Yeah. We have a mix of products-- Docker Engine and Docker Desktop for local work, hosted products like Docker Hub, and then services like Scout for supply chain security that we've added-- really a growing range of services to support you as a developer.
ABDEL SGHIOUAR: Nice. So I think Docker is pretty much, at this stage, known to everyone who is doing containers. So I think we don't have to expand on the topic of what Docker is. But today we're going to talk about something that we've been hearing about for the last, let's say, a year and a half, maybe, two years.
JUSTIN CORMACK: Unless you were really early, yeah, that's probably-- I think the last year is really when people started saying the WebAssembly word, or--
ABDEL SGHIOUAR: Wasm.
JUSTIN CORMACK: Wasm, its informal name.
ABDEL SGHIOUAR: Exactly. So let's start with, for those who don't know what it is, what is Wasm?
JUSTIN CORMACK: Yeah, WebAssembly, it's a-- there's kind of a long story about WebAssembly. It's all about the browser. And really for a long time with the browser, people have been trying to work out how to run bigger and more complex code in the browser.
And there's been a long history of ways to do this. I think Google were kind of early in this. I think Gmail was one of the early examples of compiled to the browser.
ABDEL SGHIOUAR: Native coding in the browser, yeah.
JUSTIN CORMACK: And so they called it WebAssembly because it's like assembly-- you compile your programming language to assembly when you're compiling it-- and it's for the web. So that was kind of where the name came from.
ABDEL SGHIOUAR: So that's the WebAssembly part in browser. And then what's happened in the last, maybe, year or so is people started saying, why can't we take this and just run it outside of the browser, right?
JUSTIN CORMACK: Yeah. And again, there's a long history of taking browser tech and running it outside the browser. I mean, node.js is probably the original example, but there are others as well. And I think a lot of it comes down to-- the browser security model is the most heavily tested security boundary there is, with billions of users and a huge amount of security engineering work being done on that.
One is the isolation technology and having that--
ABDEL SGHIOUAR: The sandboxing.
JUSTIN CORMACK: But again, V8's a big thing to embed. And it's difficult. It's not what it's designed for. It needs a lot of maintenance if you're going to support V8 as part of your application.
So the WebAssembly story, again, is part of that kind of story, where you take the web technology and say you want to run this portable sandbox code elsewhere. So you can just take the WebAssembly sandbox piece without the whole of V8.
And the WebAssembly sandbox is designed to support the whole browser sandboxing model and set in that framework. And then you could basically run WebAssembly code. That path from browser to server has been done quite a lot of times.
ABDEL SGHIOUAR: Yeah. And one thing that you touch on is this concept of sandboxing, which to somebody who's coming from the infrastructure space, the way I can think about it is, it's really a VM. It's kind of a virtual machine, which is sort of, from an infrastructure standpoint, it's the strongest kind of isolation or sandboxing you can have.
JUSTIN CORMACK: Yeah. I mean, the term VM is kind of used in a few different ways.
ABDEL SGHIOUAR: Yes.
JUSTIN CORMACK: There's VM in the sense of VMware, and the hardware virtualization, and that boundary. And then there's virtual machine in terms of the Java virtual machine. And we tend to call them all virtual machines. And they are virtual machines, in the original sense of virtualizing something else.
But strictly WebAssembly is more like the JVM type of virtual machine than it is-- the security boundary doesn't use the hardware virtualization instructions that chips have. So it's not the same kind of boundary as you get by having a separate VM.
However, it has been designed to be secure. Everything about it has been designed to make sure that it is a strong, not a weak, boundary.
I think when Java first happened, people thought that Java would be a strong security boundary. But there was kind of a lot of issues with the design of the JVM, and the standard library, and how the standard library worked. And it never really got there. WebAssembly has got a lot of lessons learned from the history of how to do pure software isolation.
There was another Google project, as well, that-- in the history of these isolation-- it was a modified version of x86 assembly with instruction constraints.
ABDEL SGHIOUAR: Is it-- not gVisor. gVisor's contained--
JUSTIN CORMACK: No, no. Earlier than gVisor.
ABDEL SGHIOUAR: So we'll have to figure out-- probably there is. I don't know which one.
JUSTIN CORMACK: As I said, there's been a long history of these things. And gVisor's another interesting one, because gVisor, again, is-- it has different implementation backends, but again, it's a software implementation, at the system call, API level, but it has support for hardware virtualization to strengthen the isolation.
So there's a kind of big set of these different isolation mechanisms with different strengths. And it's important when you're using things to think about what kind of security boundary you need, how much.
I think that when containers started, there was an awful lot of talk about containers versus VMs-- which is more secure, things like that. And for most use cases, if you skip over all the history and come to now, we've decided that if you want really strong multitenant isolation, containers are not sufficient.
ABDEL SGHIOUAR: Yes.
JUSTIN CORMACK: But for many purposes, they're absolutely fine. They're pretty--
ABDEL SGHIOUAR: If you don't care about isolation.
JUSTIN CORMACK: If you don't care about multitenant. For single tenant, they're fine. And for many users, they're fine.
But then we've also worked out that you can wrap these layers. And the valuable thing is the standardization and the portability of the images, and the containers, and the APIs. If you use something like Google Cloud Run, it's a container as far as you're concerned, but the implementation includes a VM for isolation. So it's a combination of the two.
And I think we'll see a lot of the same thing with WebAssembly. Google, again, has a published guideline that untrusted code has to have two layers of isolation between it and the raw hardware and the host.
And so for WebAssembly, one of those layers can be the WebAssembly boundary, perhaps. And then the other one could be-- maybe it could be gVisor, or it could be a VM, or a container. There's lots of options, depending on your detailed security requirements and these kind of requirements.
But there is the security boundary that's come from the browser in there, which we know is a strong one. So we have got at least one strong boundary there.
ABDEL SGHIOUAR: One layer.
JUSTIN CORMACK: If you need a second layer, we have a lot of technologies now for building more security boundary layers. But we've got the one. And I think that's one important part of the Wasm story. It's not the only bit.
I think the other piece is the real portability story that comes with it. Wasm really can run anywhere. It can run in the browser. It can run on a server. It can run embedded in Istio. It can run--
ABDEL SGHIOUAR: Or on a Raspberry Pi, or whatever.
JUSTIN CORMACK: Yeah. So we've got this real portability story that's very, very broad.
ABDEL SGHIOUAR: Yeah. You have Rust, C++, but there are also a bunch of other programming languages building their compilers in such a way that you can use Wasm runtimes as a build target.
JUSTIN CORMACK: Yeah. Most languages.
ABDEL SGHIOUAR: Yeah. Most languages.
JUSTIN CORMACK: There's Ruby, there's--
ABDEL SGHIOUAR: It's like 40 plus or something.
JUSTIN CORMACK: --JVM, there's all sorts now.
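The compile-to-Wasm workflow they're describing can be sketched from the command line. This assumes the Rust toolchain and a WASI runtime such as wasmtime are installed; the crate name is illustrative.

```shell
# Add the WASI compilation target to an existing Rust toolchain.
rustup target add wasm32-wasi

# Create a hello-world crate and compile it for Wasm/WASI instead of
# the host architecture (crate name is illustrative).
cargo new hello-wasm && cd hello-wasm
cargo build --release --target wasm32-wasi

# The resulting .wasm binary should run under any WASI-capable runtime:
wasmtime target/wasm32-wasi/release/hello-wasm.wasm
```

The same `.wasm` artifact is what later gets packaged into an OCI image for the Docker workflow discussed below in the episode.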
ABDEL SGHIOUAR: Yeah. So I think we touched on a lot of things. And I want to unpack one thing at a time.
But just going back a little bit through the conversation, one thing that I couldn't help myself from thinking-- because you mentioned the JVM, and the difference between the JVM from a software-layer perspective and virtual machines from a hardware-layer perspective-- is that it feels like Wasm is conceptually a virtual machine, at least from a software-layer perspective.
But you still need extra stuff if you want to run it as a system component-- if you need it to talk to the network, or bind it to a port.
JUSTIN CORMACK: Yeah. So Wasm made it-- I think it is part of the learning from Java. With Java, a lot of the security issues were in the standard library code, not the raw JVM code. And I think Wasm made a decision that-- and it also fits with the web platform model, that you have the VM, but it doesn't natively have the ability to do any kind of output or input or talk to anything else. It's a purely isolated system.
But it's a communication channel between two bits of software, one on the inside of the sandbox and one on the outside-- well, it's also probably in a sandbox as well, because everything's sandboxed. But conceptually, it's just a new place you can talk to.
And WebAssembly has exactly the same model that it doesn't natively have any APIs to talk to the outside world. But it has the ability that when you set up the runtime, you can say, look, here are the things in the outside world you can talk to. These are their APIs. And go talk to them and do what you want.
So in its raw state, it's totally opinionless about whether it's running inside Istio, or a browser, or a server with some-- But when you create it, you decide, what's the things I want it to talk to, what's this application use case for, and what interfaces shall I give it? And then you decide those.
So it's quite a good model, because it fits with a security boundary. And it's usually just a kind of IPC-type model, where it's talking over a socket. But it doesn't give you anything up front.
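The "nothing up front" model Justin describes is visible in runtime CLIs. For example, with the wasmtime CLI a module gets no host access unless you grant it explicitly; the module name here is illustrative.

```shell
# A freshly instantiated module can touch nothing on the host:
wasmtime run app.wasm

# Capabilities are granted explicitly at startup. Preopen a directory
# so the module's WASI filesystem calls can reach it:
wasmtime run --dir=./data app.wasm

# Pass an environment variable through to the sandbox as well:
wasmtime run --env LOG_LEVEL=debug --dir=./data app.wasm
```

Anything not granted on the command line simply does not exist from the module's point of view.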
ABDEL SGHIOUAR: Yeah. And so part of that is the effort called WASI, the WebAssembly System Interface, right?
JUSTIN CORMACK: Yeah.
ABDEL SGHIOUAR: By the Bytecode Alliance, I think.
JUSTIN CORMACK: That's right.
ABDEL SGHIOUAR: So WASI is essentially this interface between Wasm, as the sandbox where your application runs, and the network, whatever other stuff it needs to talk to, right?
JUSTIN CORMACK: Yeah. It's kind of an interesting thing, because it also has a really long history. It actually originated out of a lineage of things from a long time ago, with FreeBSD and the Capsicum project-- a capability-based, standardized system call interface for BSD.
And so it basically is a sort of POSIX-y, kind of UNIX-y view of the world, but not quite. But it's close enough that it's been a target that you can compile a lot of standard UNIX-y, Linux-y programs to. And it's got enough of the kind of base system calls that you can basically build a libc that does many things.
But it's still evolving. And the original, the first release, really had file system calls, and the file system interface, and a few things like random numbers, and time, maybe clocks, things like that, and not very much for networking.
ABDEL SGHIOUAR: Yeah. Not yet, I think.
JUSTIN CORMACK: Yeah. I mean, technically you can pass in a network socket, I think. But anyway. So what's happened is that a lot of people have added their own extensions for how networking should work while the standard evolves. But the standard is kind of moving to the next version, which will have networking support, because that's obviously been--
ABDEL SGHIOUAR: Well, that's one of the major things--
JUSTIN CORMACK: It's been a big issue. On a server, you kind of want to do networking. It's useful, you know.
ABDEL SGHIOUAR: I mean, if you want to bring Wasm to the cloud, you need that concept.
JUSTIN CORMACK: Yeah. You do. Yeah.
ABDEL SGHIOUAR: And so then talking about that-- so you will correct me if I'm wrong, but I think that probably for a lot of people, once they would hear Docker say, we support Wasm with Docker, the first thing that will pop in their heads-- and what happened to me-- is oh, let's just take a Wasm module and shove it inside the container. And I feel like that's not what you're doing, right?
JUSTIN CORMACK: No, but-- so there's a number of things we've been trying to do. And there's a gradual process of things we need to do going forward, as well. So we've been working with Microsoft and other people on a containerd shim--
ABDEL SGHIOUAR: Supports, yeah.
JUSTIN CORMACK: --that supports Wasm runtimes. So right now, yeah, it does actually run the stuff in a container most of the-- But it's not necessary. We can fix that later.
ABDEL SGHIOUAR: You can make it run as a standalone binary, essentially, outside of a container.
JUSTIN CORMACK: Yeah. It's really a runtime. It's just incremental bits about-- containerd is a very general tool for running things, and managing them, and pulling, and in particular taking OCI Docker images, and unpacking them, and then handing them to the bit that does the actual run, which normally is runc, which runs containers.
But it's a very pluggable ecosystem. And so you can also run the WASI bit. And that doesn't have to run in a container. It doesn't necessarily have to run on Linux, either. It's multi-platform. But there's work to be done still to kind of unpack all this, and it's on us to make it work fully how we want.
So the Docker Desktop stuff runs it under Linux for now. But that's just one of the things we need to tidy up. A lot of what we wanted to do was show that you could really have the same kind of workflows with Wasm. You could still run Wasm with docker run and have a way that you could uniformly run it.
But that doesn't mean that-- like with containers, when you run them in Google Cloud Run, you don't really know-- it doesn't matter how they run. It just runs. And same with Wasm. It's about having standards for how it's packaged, and the interfaces it has, and therefore you'll know that you're going to be able to run it somewhere.
It's not quite 100% smooth like that, because there's different runtimes that have-- mainly because they have different WASI networking implementations right now, and things like that.
ABDEL SGHIOUAR: Yeah. The ecosystem is still quite fragmented.
JUSTIN CORMACK: Yeah. But the model we're working towards is you can compile your stuff as Wasm, and you can run it anywhere in the same way you can with containers. Because I think what's going to lead to really wide adoption is when it's very simple and straightforward-- just run it everywhere, just use it as a target-- and that gives you that universality of runtime that we have with containers.
ABDEL SGHIOUAR: Yeah, a run anywhere kind of model.
JUSTIN CORMACK: Yeah.
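The uniform docker run workflow they're discussing looks roughly like this in the Docker Desktop beta. It requires the containerd image store and the Wasm feature enabled; the image and runtime names follow Docker's published examples, so treat them as illustrative.

```shell
# Run a Wasm module with the same CLI used for containers, selecting a
# Wasm runtime shim instead of runc, and a wasi/wasm32 platform:
docker run --rm \
  --runtime=io.containerd.wasmedge.v1 \
  --platform=wasi/wasm32 \
  secondstate/rust-example-hello

# With the multi-runtime support added in the 4.18 beta, other shims
# (e.g. io.containerd.spin.v1 for Fermyon Spin apps) can be selected
# the same way via --runtime.
```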
ABDEL SGHIOUAR: And so this is-- we have to mention also is this is still beta in Docker Desktop 4.15, right? The support is still in beta. So it's still early work that you are-- it's ongoing, essentially.
JUSTIN CORMACK: Yeah. We did another release just a few weeks ago, as well, with multi-runtime support. So there was a kind of beta 2 in 4.18 as well. So we're adding to it. We're just-- there's lots of things to clean up.
But yeah, the big thing we added was support for multiple runtimes, because we started to work with WasmEdge, which is a CNCF project. And we now support a whole range because, as we were saying, things like networking are not standardized.
These runtimes have slightly different APIs they've chosen to have. So right now, choosing a runtime basically means choosing the API set you're going to run with. The actual Wasm execution might perform slightly differently with a different runtime, but the real difference is around the API sets you're providing.
So we've offered a choice there so that if you want to run the Fermyon Spin runtime, you can run that, so if you're writing something to target that. But we see that as a kind of temporary thing while we work out the standard API sets so that everyone can run everywhere.
Because long term, we don't want this sort of fragmented ecosystem where you have to think when you build something, what am I building it for? You want it to just be able to run everywhere.
ABDEL SGHIOUAR: Yeah. That's actually one of the promises of Wasm, which I think is one of the most interesting things about it: this single binary, regardless of the platform. You just build a single artifact, basically, and then it will just run anywhere. And that's, I would say, the promise.
JUSTIN CORMACK: Yeah. But as we talked about earlier, when you set up your Wasm runtime, you choose the APIs you provide. They're not necessarily the same, but there's a lot of work going on around how you specify the set of APIs available.
And there's the WASI standardization process, and the next iterations with more networking, which will address a lot of the reasons people have diverged in the short term. And potentially-- although hardly anyone's doing this at the moment-- you could write a Wasm program that adapts to what is available to it and chooses different APIs, things like that.
So there's a bunch of options. But it's still kind of new at the moment, so people are still--
ABDEL SGHIOUAR: Yeah. It feels to me like-- another way of thinking about it is, basically you are sort of decoupling the app itself from its environment. Because the way you're describing it looks like you would be able to write your program, and then at runtime you can define, these are the set of APIs I need, or these are a set of APIs available in the environment.
And then I can do file system, network calls, blah, blah, blah. Right? So it feels like it's interesting in the sense that it's-- still the app itself is just standard. It's a single app that can run anywhere. And then the app could adapt to environments. Or is it the other way around?
JUSTIN CORMACK: Well, again, I think that there's different kind of ways that this might end up. And I think there's--
ABDEL SGHIOUAR: A lot of speculations here.
JUSTIN CORMACK: But there's a bunch of work using these interfaces in order to specify the composition of multiple components. So you have a component that provides the interface, and you attach it to the one that requires it. And that's the WebAssembly component model.
ABDEL SGHIOUAR: Yeah.
JUSTIN CORMACK: And so if your application needs a database API, because you want it to talk to a database, and this application provides that, you can connect them together. And they can talk. So that's another way people are thinking about how to use this to modularize components.
So it doesn't have to necessarily be that just your app talks to the standard APIs. It could also say, oh, and I need this-- I need a Redis endpoint. And I need-- and someone else can provide it. And so on. So there's a number of different ways people are thinking about what this might end up like.
And a lot of people are quite excited by the component model. There's a lot of options, again, about what the future might look like. It's still really at this stage where we're exploring how to build an ecosystem, and what are the things that make it different, and valuable, and help developers develop applications more easily using these new capabilities. How are we going to put these things together?
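As a rough illustration of the component model idea, interfaces like the database or Redis examples above are described in WIT, the component model's evolving interface-definition language. The sketch below is hedged: WIT syntax has changed across drafts, and the interface and function names are hypothetical.

```wit
// Hypothetical key-value interface that one component could require
// and another component (or the host) could provide.
interface kv-store {
  get: func(key: string) -> option<string>
  set: func(key: string, value: string)
}
```

A component declaring that it requires `kv-store` can then be composed with any provider of that interface, which is the modularization Justin describes.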
And I think that another thing that we learned at Docker from containers is that a lot of applications are people doing composition of containers. Docker Compose is its name. It's in the name.
ABDEL SGHIOUAR: I was about to say that.
JUSTIN CORMACK: But a lot of people have these microservice applications that are made up of a number of components that work together. And so again, with WebAssembly we're thinking like, is that composition-- how does that composition fit into the WebAssembly model? What's it going to look like?
ABDEL SGHIOUAR: Yeah.
JUSTIN CORMACK: And I think there's a lot of learning from microservices and containers about-- it's part of the reason we're excited about WebAssembly is that we see it very much as a new piece that's in the same space.
And I think that's why there's a lot of WebAssembly at KubeCon. There's a whole other presentation tomorrow. But there's a lot of overlap between the people doing WebAssembly, especially server side, and the people doing Kubernetes and microservices, because everyone feels it's a kind of iteration on the same thing.
And that's why at Docker, we're trying to make-- again, we're still exploring these directions. But we're trying to make a kind of route for developers to be able to move into the new ecosystem when they're ready, and when it's ready for them, so that there's a kind of continuity for developers. They don't have to kind of suddenly do everything very differently.
Because I think there are these evolutionary things. There are things that we've learnt about using containers and deploying container applications at scale that we're not going to just throw away because there's a change. We're going to keep using these things, and evolve them, and learn what else we can do.
ABDEL SGHIOUAR: Yeah. I'm not saying anybody today just decided, we're going to rewrite everything in WebAssembly. That's not the point. It's a very early space. And actually, that is one of the reasons why we really wanted to talk to you, or to Docker in general: when you're looking into the Wasm stuff, people are doing a lot of things, but it's not super clear which direction they want to go in.
And I think Docker with Wasm sounded to me like one of these combinations of things that makes sense for developers, because one of your focuses is the developer experience. So I want to touch on one thing and then we can wrap it up. Where do you see all of this fitting within Kubernetes itself? Because one project I heard about is trying to make the kubelet support Wasm modules through containerd, where you would be able to run--
JUSTIN CORMACK: Yeah. So you can run a-- and I think it's available experimentally in Azure at the moment. It's obviously still going through iteration. But that's probably the only way you can do it.
And again, that's part of the seamless transition. You can mix WebAssembly and non-WebAssembly in the same kube cluster, and that kind of thing. So there's a very straightforward evolutionary thing.
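On the Kubernetes side, the containerd-shim approach Justin mentions typically surfaces through a RuntimeClass, which is what lets Wasm pods and container pods share a cluster. A hedged sketch: the handler name must match however the node's containerd is configured, and the image reference is hypothetical.

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmedge
handler: wasmedge        # must match a runtime entry in the node's containerd config
---
apiVersion: v1
kind: Pod
metadata:
  name: wasm-demo
spec:
  runtimeClassName: wasmedge   # schedule this pod onto the Wasm shim
  containers:
  - name: app
    image: registry.example.com/hello-wasm:latest   # hypothetical Wasm OCI image
```

Pods without `runtimeClassName` keep running under runc as usual, which is the "mix WebAssembly and non-WebAssembly" evolution described here.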
I do get the sense that some people think that Kubernetes might not be the best fit. I don't know. It's kind of interesting. I think that there are some people who think the kind of niche for WebAssembly is very lightweight serverless stuff-- very fast spin up and spin down, very fine-grained serverless, much more AWS Lambda-style serverless rather than, say, Cloud Run-style serverless, for example.
And there were-- and I think some of those people think that currently Kubernetes doesn't spin things up fast enough, that kind of--
ABDEL SGHIOUAR: It could be slow. It really depends on the use cases.
JUSTIN CORMACK: It does depend on the use cases. But it's not-- I don't think-- my view is that that's not something that can't be fixed. The Kubernetes API is very generic. The implementation tends to be kind of slow for lots of reasons that we could spend a lot of time going into. But the API doesn't have to have those properties.
ABDEL SGHIOUAR: It doesn't have to be container focused. That's what we're saying. It doesn't have to be only--
JUSTIN CORMACK: It's like some of the-- CrashLoopBackOff happens a lot. It takes time. It's not required to happen as part of the spec. It's just a convenient implementation. But it happens to add a bit of a latency to starting containers, for example. It's like sequencing events.
And I remember we kind of had this conversation ages ago, when Kubernetes was just taking off, with Mesos-- because Mesos was originally a big thing. It came from the scheduling world, where you spent a lot of time optimizing placement and scheduling. And everyone said that was going to be important in Kubernetes.
And it wasn't really important in Kubernetes, actually.
The developer experience was important in Kubernetes, and the operator experience, and many other things.
And so I kind of think that some of it is just those things coming around again. What's important, and what's--
ABDEL SGHIOUAR: What are we trying to solve?
JUSTIN CORMACK: What are we trying to solve for, and what's our workloads look like? And if our workloads change the shape they look like and become a couple of orders of magnitude more, much smaller processes than containers, then we can optimize. We can optimize the runtimes to support those better.
But that's not what they look like right now. And so I think there's still a lot of scope for doing work across the ecosystem just to support these different kinds of workloads. And we saw that already with things like Knative for supporting serverless-type workloads. And it's just like, maybe we need different tools for different work.
And so I think there's a lot of evolutionary change that we'll explore around the existing ecosystem, and how to move it into new things. And we'll see what kind of Wasm workloads people want to run, what kind of things they want to do.
It's the endless variety of, what kind of applications do we want to build today? How do we want to build them? How do we want to-- what kind of performance, and state management, and all the other things do we need?
ABDEL SGHIOUAR: Yeah. One thing that-- and I'm, again, here just kind of having more a futuristic look into it, is--
JUSTIN CORMACK: Because might as well be futuristic about this.
ABDEL SGHIOUAR: Yeah. Exactly.
And a lot of it is still undefined. So one interesting use case-- we touched on it earlier-- is this problem of multitenancy in Kubernetes, right? How do you ensure strong isolation between workloads if they are running on shared infrastructure? And especially, very importantly, if you are a third party-- if you are a service provider, you are hosting third-party code you don't trust on your own infrastructure.
And so-- because, as of now, we don't run Cloud Run on Kubernetes. We run on Borg, because Borg has a longer history of isolation compared to Kubernetes itself. So one way I'm thinking about this is, you could say to people, write your code in whatever programming language you want. Compile it to Wasm.
And we would take your Wasm, with all the component and system interfaces we've been thinking about that it needs, and put all of that inside a microVM, like Firecracker, or Kata Containers, or something like that. And then we have these two layers of isolation you talked about: you have the sandbox which Wasm provides, then you have the microVM isolation layer. And that's where you could potentially have strong isolation on top of Kubernetes, right? That's one use case I'm thinking about.
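For readers who want to picture how a cluster selects such a microVM layer today, Kubernetes exposes this through the RuntimeClass API. This is a sketch, not a working config: the handler name must match whatever runtime the node's containerd is actually configured with (`kata` is the conventional name for Kata Containers), and the image name is hypothetical.

```yaml
# Sketch: a RuntimeClass selecting a microVM runtime such as Kata Containers.
# The handler must match a runtime configured in containerd on the node.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
# A pod opting into that runtime via runtimeClassName.
apiVersion: v1
kind: Pod
metadata:
  name: isolated-workload
spec:
  runtimeClassName: kata
  containers:
  - name: app
    image: example.com/untrusted-app   # hypothetical image
```

The same mechanism is how Wasm shims are typically wired into Kubernetes as well, with a different handler name pointing at a Wasm-capable runtime.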
JUSTIN CORMACK: Yeah. Which you can do with Kata Containers with containers now.
ABDEL SGHIOUAR: Yes.
JUSTIN CORMACK: And that was the whole Kata Containers story.
ABDEL SGHIOUAR: Original story.
JUSTIN CORMACK: Original story. And I think that kind of model potentially also works for WebAssembly. I think part of it is, again, the detail of the kind of workloads and the kind of workload provider you want to be, and what the alternatives are.
I think a lot of people decide that they don't want to solve that problem. They'd rather have Google solve that problem for them.
ABDEL SGHIOUAR: Yeah. I'm talking about it from a provider perspective.
JUSTIN CORMACK: But yeah, and the providers, as you say, they tend to not-- yeah, I don't think any of the large providers do multitenant Kubernetes as--
ABDEL SGHIOUAR: As far as I know, no.
JUSTIN CORMACK: --as far as I know. But there are other aspects, securing the Kubernetes APIs as well as the VM isolation pieces, that you have to take into account, that are also important, because obviously the pods have access to the Kubernetes API. So you've got attack surfaces other than through the VM.
ABDEL SGHIOUAR: Yes.
JUSTIN CORMACK: But again, certainly WebAssembly potentially gives you more control over how the networking egresses from the pod. But again, you've got service meshes that can do that kind of thing, as well, if you want to control traffic and make sure it's not routed into the cluster, it's always routed out of the cluster, or it's controlled in the cluster.
But I think there are a number of things to consider. Multitenant infrastructure has a lot of things you have to consider; it is complicated. But certainly those isolation layers definitely could be part of that.
ABDEL SGHIOUAR: Yeah. Yeah. So that's kind of what I was thinking about. Well, one last thing I wanted to touch on is, what is the Bytecode Alliance? I read about it a little bit. You are a voting member, as Docker?
JUSTIN CORMACK: Yeah. So the Bytecode Alliance is really, it's part of the Linux Foundation. It's the standards organization for WebAssembly. Because WebAssembly started as a Mozilla project, but it was from very early on, as we talked about earlier, it was across the browsers, so Chrome and the Google team were heavily involved. So having a standards organization for that made sense.
And then the non-browser parts also live in there as well, along with the usual things around trademark, working together in a neutral way, and making sure that everyone has a level playing field. All the things that the Linux Foundation has done for Kubernetes as well: helping to grow it as a standard platform without anyone feeling that they can't influence the direction, and see where it's going, and those things.
So I think that part of standards is sometimes ignored. But it's often really important to have a good, solid governance behind it.
ABDEL SGHIOUAR: Yeah. Nice. Well, that was actually an interesting conversation, especially since we're talking about something that is still moving. It feels like you're basically swapping the tires on a car that's driving down the highway. So hopefully we'll have you, or somebody else, back once there is more clarity about how this whole WebAssembly ecosystem will end up looking, in a few months, years, whatever.
JUSTIN CORMACK: Yeah, absolutely. It's moving fast. There's a lot going on. I'm sure there's going to be a lot of announcements and things tomorrow.
ABDEL SGHIOUAR: Oh, I'm expecting a lot.
JUSTIN CORMACK: But yeah, I think that it's an exciting area where there's still a lot of possibilities. And I think that always excites people who want to contribute to something while it's new, while there's the opportunity to kind of make your mark and lay down the groundwork for something that could go in so many directions.
And I think that there's this growing amount of excitement around the ecosystem that this is going to be an important piece of software going forward. So it's a great time to get involved.
ABDEL SGHIOUAR: Yeah. Yeah, no, it's definitely an interesting space to be involved in if you are looking to learn something new. WebAssembly is quite new to me; I just started playing with it a few months back. Also, as you said, get involved. It's in the Linux Foundation, in the cloud native space, so it kind of follows the same community model as all the other projects.
So, yeah. Thank you very much, Justin, for being with us.
JUSTIN CORMACK: Thank you. Good to--
ABDEL SGHIOUAR: It was a great conversation. And hopefully, we'll get to chat during KubeCon this week. And thank you for listening to "The Kubernetes Podcast" by Google. This is your host, Abdel. And see you next time.
KASLIN FIELDS: I feel like a lot of folks these days, when they start talking about Wasm, they just talk about what it is to them now, which we'll get to in a second. But it's really evolved over time.
ABDEL SGHIOUAR: Yeah. I felt it was important to talk about the origins, because if you were not doing front-end technology for very specific use cases, you probably have never even heard of WebAssembly, right?
KASLIN FIELDS: Yeah, and the name. Those types of concerns are not generally our area. But then this WebAssembly thing has just kind of popped up into the containers and infrastructure area from the world of front-end engineering.
And so here's what I got from your conversation about that. The way that I explain containers is: engineers were trying to find a way to run their code, basically to find a lighter-weight packaging and isolation mechanism.
We had virtual machines. They work as a packaging mechanism and as an isolation mechanism, but they're pretty heavy. So engineers were trying to find something lighter, and they came up with containers, which provide packaging and isolation using cgroups and namespaces. Containers are basically processes in Linux that just have special cgroups and namespaces associated with them to provide that packaging and isolation.
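The "a container is just a process" point can be seen directly on a Linux host: every process exposes its namespace memberships under `/proc/<pid>/ns`, and a containerized process simply points at different namespace IDs than the host. A minimal, Linux-only sketch:

```python
import os

# Every Linux process belongs to a set of namespaces (pid, net, mnt, uts, ...).
# /proc/self/ns lists them as symlinks whose targets encode the namespace IDs.
namespaces = {name: os.readlink(f"/proc/self/ns/{name}")
              for name in os.listdir("/proc/self/ns")}

for name, ident in sorted(namespaces.items()):
    print(name, ident)

# Two processes in the same container share these IDs; a process in a
# different container (or on the host) shows different ones.
```

Running this inside and outside a container and comparing the IDs is a quick way to demystify what the container boundary actually is.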
ABDEL SGHIOUAR: Yeah.
KASLIN FIELDS: So containers are already pretty lightweight, with it being just processes, essentially. But it sounds like from your conversation, Wasm is kind of taking that concept of trying to create a lighter weight form of packaging and isolation and applying a very different view on it.
You all mentioned the conversation about it being more similar to a JVM in concept.
ABDEL SGHIOUAR: Exactly.
KASLIN FIELDS: It was very interesting to me.
ABDEL SGHIOUAR: I did some Java development a long time ago. The JVM-- and this is something that you only hear if you talk to Java people when they talk about the JVM, the Java Virtual Machine. I'm not going to pretend to be an expert in Java.
Just my recollection of how it used to work: you write Java code, and you compile it into what's called bytecode, which is not machine code and not source code; it's in between. And then the JVM is the runtime, and the JVM itself depends on the architecture where you run your application. So there's a JVM for ARM, there's a JVM for x86, there's one for whatever.
And then your bytecode is portable across all the architectures, because the runtime, the JVM, is the common part. And that's where Justin was bringing in that analogy from the Java world, by saying, basically, what WebAssembly is, it's kind of like a virtual machine or a runtime.
Because if you read about WebAssembly today, there are about 40 languages that support compiling to a Wasm target. So when you compile your app, when you build it, you can set the target environment to Wasm, and it will produce a Wasm app.
And then you can take this Wasm app and put it wherever a WebAssembly runtime is available, and run it. And when you really dig into WebAssembly, it is OS-independent, or OS-agnostic, because underneath, what you have is basically ARM or x86, right? So your Wasm app can run on ARM, and it doesn't matter what the OS is. Windows, Linux-- doesn't matter, right? And same thing for x86. So that's kind of the analogy between the JVM world and the WebAssembly world.
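The bytecode-plus-runtime pattern Abdel describes is easy to see in CPython itself, which also compiles source into architecture-neutral bytecode that a per-platform virtual machine executes. A small illustration:

```python
import dis

source = "result = sum(n * n for n in range(5))"

# compile() produces a code object: bytecode, not machine code.
code = compile(source, "<demo>", "exec")

# The bytecode is architecture-neutral; the interpreter (the VM built
# for ARM, x86, etc.) is what turns it into native execution.
dis.dis(code)

namespace = {}
exec(code, namespace)
print(namespace["result"])  # 0 + 1 + 4 + 9 + 16
```

The same split, portable intermediate format plus a runtime compiled per platform, is what the JVM does for Java and what Wasm runtimes do for Wasm modules.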
KASLIN FIELDS: Ah. That makes sense. Of course, in the container world, there have been a lot of conversations about ARM, and x86, and your underlying architecture. But really, the way that we think about containers, we usually kind of stop at the operating system level, because that's where the cgroup and namespace components and the processes all live.
ABDEL SGHIOUAR: Correct.
KASLIN FIELDS: So at least when I think about containers, that's about as deep as I usually go. But it sounds like Wasm is kind of taking that thinking a step further on how you get from your application that you have written all the way down to the machine code.
ABDEL SGHIOUAR: Correct.
KASLIN FIELDS: And it's kind of changing that process. All right. Interesting.
ABDEL SGHIOUAR: I think Wasm is probably as close as we're going to get to the promise of "write once, run everywhere."
KASLIN FIELDS: Interesting.
ABDEL SGHIOUAR: Because containers were supposed to deliver that, but they don't. Because you still have to--
KASLIN FIELDS: But it's kind of at the operating system level, right?
ABDEL SGHIOUAR: Correct.
KASLIN FIELDS: It's different on Windows versus Linux, because you've got to rely on the cgroups, namespaces-esque concepts within the operating system.
ABDEL SGHIOUAR: And it's different on x86 and ARM, because you still have to build toward a specific target architecture. Docker has multi-arch support, where you can build one image that can run on both. But this is taking it a step closer toward: I don't care, there's a WebAssembly runtime there, so I'll be able to run my app. Right?
KASLIN FIELDS: Interesting.
ABDEL SGHIOUAR: And then there was the whole conversation about the sandboxing, which I think is one of the main strengths of the technology.
KASLIN FIELDS: Yeah. I was going to ask about that, as well. So that makes sense from a, here's how this packaging mechanism is different from what containers have done, and might be very interesting. What about the isolation piece?
ABDEL SGHIOUAR: Yeah. As Justin said in the interview, browsers in general are the best sandboxing technology that exists today. Basically, when you are doing things in your browser, tabs are isolated into individual sandboxed processes, and one tab cannot access another tab. And if you want to grant that access, it's explicit; you have to request specific permissions. So it's not open by default.
So an example is when you are trying to share your screen in a meeting, whether you're in Meet, or Zoom, or whatever streaming app we're using right now. When you try to share your screen, there is actually a standard set of APIs that the application has to implement to be able to get access to the tab it wants to share.
So that sandboxing technology is actually one of the strong points of Wasm, one that people are trying to bring into backends, or the cloud, because it has obvious benefits.
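To make the deny-by-default idea concrete, here is a toy, entirely hypothetical sketch of capability-style sandboxing (not WASI or any real Wasm runtime): the guest can only call host functions that were explicitly granted, the way a Wasm module only gets the imports its host chooses to provide.

```python
class Sandbox:
    """Toy capability-based sandbox. Hypothetical sketch, not a real
    Wasm runtime: the guest sees nothing by default, and the host must
    explicitly grant each capability, mirroring how a Wasm host
    supplies imports to a module."""

    def __init__(self):
        self._granted = {}

    def grant(self, name, func):
        # Explicitly expose one host function to the guest.
        self._granted[name] = func

    def call(self, name, *args):
        if name not in self._granted:
            raise PermissionError(f"capability {name!r} not granted")
        return self._granted[name](*args)


sandbox = Sandbox()
sandbox.grant("log", lambda msg: f"logged: {msg}")

print(sandbox.call("log", "hello"))  # allowed: capability was granted

try:
    sandbox.call("read_file", "/etc/passwd")  # denied by default
except PermissionError as err:
    print(err)
```

The inversion is the point: instead of starting with full OS access and subtracting (as a process does), the guest starts with nothing and the host adds.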
I think-- just taking a step back a little bit here. It's very interesting; something just popped into my head. So containers do the packaging stuff. And they do isolation, up to a certain level. Good.
Now, although containers don't do full isolation-- they do a little bit of isolation-- cloud providers, like us, like Amazon, like all the other ones, when they allow you to run apps on their infrastructure, they do not rely on containers for that. Amazon had to write a microVM, Firecracker, to be able to support Lambda, which is not a full VM and not a container, something in between.
So I think that the sandboxing technology that Wasm allows is a very obvious feature that a lot of people need, at least from a cloud provider's point of view. Letting people run arbitrary code in your environment, that's one of them.
Or if you care about isolation between tenants on your infrastructure, for example: you still want the infrastructure shared between a lot of tenants, but you want strong isolation between them. So despite all the undefined and open questions we discussed in the interview, it is a technology that has a lot of good stuff in it.
KASLIN FIELDS: Hmm. A lot of promise here.
ABDEL SGHIOUAR: Yes.
KASLIN FIELDS: I think there's-- like you said, the next several months of development in the Wasm world will be very telling. I look forward to it.
ABDEL SGHIOUAR: Exactly. And there is a conference we covered in the news: WasmCon, in September, in Bellevue, Washington. I think it's going to be the first edition. Before that, they did Wasm Day at KubeCon EU this year.
KASLIN FIELDS: Yeah. I definitely think I'm going to try and check that out.
ABDEL SGHIOUAR: Yeah. And there are quite a lot of things going on. Just go ahead and Google Wasm, and you will be flooded with the amount of stuff that pops up.
One other thing, also-- this was interesting to me, at least. And this is funny, because I had this conversation with Justin back in April, and then we went on and did a lot of other stuff. And last week, I was in the south of Sweden for a meetup.
So I ended up meeting a person who open sourced and maintains a tool which, essentially, is a distributed image caching tool for Kubernetes.
So the way it works is that it installs itself as a DaemonSet on all the nodes in your cluster, and it caches images and acts as a proxy. So when a node is trying to pull an image, the pull is proxied through that tool, and the tool will fetch the image from another node if it's available there, before it goes upstream to the container registry.
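The pull path Abdel describes boils down to a simple lookup order: local cache, then peer nodes, then the upstream registry. The names below are all hypothetical, and the real tool proxies containerd pulls rather than passing dictionaries around, but the control flow looks roughly like this:

```python
def pull_image(image, local_cache, peers, upstream):
    """Resolve an image: local cache -> peer nodes -> upstream registry.

    Hypothetical sketch of a peer-to-peer pull-through cache, not the
    actual tool's implementation."""
    if image in local_cache:
        return local_cache[image], "local"

    # Ask the other nodes in the cluster before going out to the registry.
    for peer_name, peer_cache in peers.items():
        if image in peer_cache:
            local_cache[image] = peer_cache[image]
            return local_cache[image], f"peer:{peer_name}"

    # Fall back to the upstream container registry.
    local_cache[image] = upstream[image]
    return local_cache[image], "upstream"


local = {}
peers = {"node-b": {"nginx:1.25": b"layers-from-node-b"}}
registry = {"nginx:1.25": b"layers-from-registry", "redis:7": b"redis-layers"}

print(pull_image("nginx:1.25", local, peers, registry)[1])  # peer:node-b
print(pull_image("nginx:1.25", local, peers, registry)[1])  # local (now cached)
print(pull_image("redis:7", local, peers, registry)[1])     # upstream
```

The win is that most pulls stay inside the cluster, so registry egress and cold-start latency drop.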
It's using containerd. And so he was like, it's using containerd. And I was like, I've been seeing a lot of things using containerd. And then I remembered what Justin said, that containerd is just a generic runtime environment.
KASLIN FIELDS: Yeah. Everything uses containerd.
ABDEL SGHIOUAR: Which I hadn't thought about. It's kind of funny.
So I just mention this because it's a topic we could come back to; there are a few people that we would like to have on the show to talk about containerd, actually.
KASLIN FIELDS: Yeah. We'll have to look forward to that in the future.
ABDEL SGHIOUAR: Sounds good.
KASLIN FIELDS: That brings us to the end of another episode. If you enjoyed this show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on Twitter @KubernetesPod or reach out to us by email at <kubernetesPodcast@google.com>.
You can also check out the website at KubernetesPodcast.com, where you'll find transcripts, and show notes, and links to subscribe. Please consider rating us in your podcast player so we can help more people find and enjoy the show.
Thanks for listening, and we'll see you next time.