#238 October 1, 2024
Marvin Beckers is a Team Lead at Kubermatic and a contributor and maintainer of the CNCF Sandbox Project KCP. KCP is an open source horizontally scalable control plane for Kubernetes-like APIs.
Do you have something cool to share? Some questions? Let us know:
ABDEL SGHIOUAR: Hi, and welcome to the Kubernetes Podcast from Google. I'm your host, Abdel Sghiouar.
KASLIN FIELDS: And I'm Kaslin Fields.
[MUSIC PLAYING]
ABDEL SGHIOUAR: In this episode, we speak to Marvin Beckers. Marvin is a team lead at Kubermatic and contributor and maintainer of the CNCF Sandbox project KCP.
KASLIN FIELDS: But first, let's get to the news.
[MUSIC PLAYING]
ABDEL SGHIOUAR: Docker announced their subscription prices are increasing. Starting November 15, Docker Pro is moving from $5 to $9, and Teams is moving from $9 to $15 per month. The price for Docker Enterprise remains the same.
KASLIN FIELDS: The Linux Foundation announced intent to form a Developer Relations Foundation, or DRF for short. The new foundation aims to address some of the challenges DevRel organizations face across the industry, like lack of clarity, recognition, and the difficulty of measuring impact. The DRF will partner with industry leaders to address these challenges and already received support from multiple companies.
ABDEL SGHIOUAR: NetApp INSIGHT took place September 23 to 25 in Vegas. NetApp CEO George Kurian emphasized the importance of data for AI workloads. The company is integrating AI functions into its storage platform, enhancing interoperability to eliminate data silos and optimizing storage for cold data. Partnerships with hyperscalers like AWS, Azure, and GCP, and NVIDIA, are highlighted as key to delivering AI solutions across diverse workloads, emphasizing cost optimization, security, and sustainability in data storage.
KASLIN FIELDS: And that's the news.
[MUSIC PLAYING]
ABDEL SGHIOUAR: Hello, folks. I'm today with Marvin Beckers. Marvin is a team lead at Kubermatic and a contributor and maintainer of the CNCF Sandbox project KCP. Welcome to the show, Marvin.
MARVIN BECKERS: Thanks for having me.
ABDEL SGHIOUAR: So before we started recording you told me that KCP does not stand for Kubernetes control plane, which is the thing that I was-- that's how it was described to me the first time I heard about it. So what is KCP?
MARVIN BECKERS: So it's not an official acronym, but I think the description is kind of spot-on. So it's a cloud-native control plane for Kubernetes-like APIs. So you get the idea, even if it's not what KCP stands for.
ABDEL SGHIOUAR: Got it. OK. A cloud-native control plane for Kubernetes-like APIs. So then the next question is, what is actually KCP? What do people use it for?
MARVIN BECKERS: KCP is basically the idea that the Kubernetes resource model, all the engineering that went into the Kube API server, it has become more useful than just using it for container orchestration. So it's a decade now of engineering and technology that went into it, and KCP is basically the idea that this is extremely useful for hosting any sort of API.
And I think in the ecosystem, we see this already with a lot of projects that don't really use it for container orchestration. They use it as a control plane. If you think of projects like Crossplane, for example, that uses Kubernetes as a control plane and builds on top of the API, not necessarily on top of the container orchestration part.
ABDEL SGHIOUAR: So then the next question becomes, why would people use KCP as a control plane instead of just using Kubernetes?
MARVIN BECKERS: KCP has several features built in that are well-primed for this specific use case. Because KCP thinks about APIs and not primarily about container orchestration, KCP has ways to manage APIs at scale. KCP also has a multi-tenancy concept that you don't find in Kubernetes, which is called logical clusters. Logical clusters basically allow you to serve multiple Kubernetes API endpoints, or something that looks like a Kubernetes API endpoint, from the same KCP instance. So this way you can get a lot of Kubernetes API server endpoints at very little cost.
ABDEL SGHIOUAR: Interesting. So you mentioned control plane as an example. And that was one of the-- I mean, KRM, I've seen KRM before, the Kubernetes Resource Model. And when you come into the context of anything that does something like Crossplane, then the question that naturally people ask is, do you actually need to run-- and I think you answered this question partially, but do you actually need to run the entire Kubernetes control plane with the API server and scheduler and all that stuff just to run Crossplane? And it seems to me like KCP is the answer to this question. If you don't need the container orchestration part, if you don't need the container runtime part, you could just run KCP, which gives you the control plane part, right?
MARVIN BECKERS: Yeah, that's exactly the idea behind KCP. It's basically Kube without the container orchestration.
ABDEL SGHIOUAR: Interesting. And then you have the multi-tenancy part, which basically-- I mean, I'm assuming here, correct me if I'm wrong-- is that only for cloud providers trying to offer things on top of KCP, or would that be useful also for end users?
MARVIN BECKERS: I mean, we consider KCP a foundational technology. We sometimes call it a framework, but it's not like a programming framework, so that sometimes confuses people a bit. Basically, it's an API server. It's a control plane. So what you use it for, or what you want to build with it, is up to you. For example, if you're building a SaaS-- like you say, a cloud provider or a SaaS provider-- you could start with KCP with a strong multi-tenancy setup and go from there.
We could also think of this in terms of platform engineering, where basically you're not a cloud provider but you're building an internal platform for developers or other people in your organization to use. So these are two of the several use cases that you could bring KCP into.
ABDEL SGHIOUAR: Right. Interesting. OK. And I just forgot to say this at the beginning of your introduction-- if the sound seems a little bit weird compared to how people are used to hearing it, it's because we are actually recording at a conference. We are here at Container Days Hamburg. It's my first time at this conference. It's pretty cool. We can probably talk about it at some point, maybe, if you want.
But let's just stay on the KCP topic. So I went to the documentation, I did some reading, I read some terminology, and it's a little bit confusing. [LAUGHS] But it's something you would expect from, as you say, a framework-- something that is meant for people to build stuff on top of. So let's go through these basic concepts. What is a logical cluster? Let's start there.
MARVIN BECKERS: Yep. So a logical cluster, it's basically a way to partition an API server. I'm not going to say Kube API server, because it's not exactly the Kube API server, but something built on top of that. It's a way to partition basically the storage layer, so etcd in most cases, and your access to the API server into multiple ones. So basically each logical cluster that you talk to, it has its own space in etcd in the key value store. And it has its own endpoints that you can talk to, and it acts like a single standalone Kube API server.
ABDEL SGHIOUAR: Even if behind the scenes, it's only one instance of KCP?
MARVIN BECKERS: Exactly.
ABDEL SGHIOUAR: You would be able to expose as many endpoints as you want using logical clusters, because there is one endpoint per logical cluster, right?
MARVIN BECKERS: Yeah, exactly.
ABDEL SGHIOUAR: And then how is the multi-tenancy at etcd done? Do you add a value to the-- how is that done?
MARVIN BECKERS: Yeah. So there's a prefix that is, I think, prepended, and that basically includes, OK, which logical cluster is this data related to? And then the API server, KCP, it takes that information and returns different data depending on which logical cluster you're talking to.
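To make the idea concrete: the exact key layout is an internal implementation detail of KCP that can change between versions, but conceptually the same resource in two logical clusters ends up under two different prefixes in the key-value store, roughly like:

```
/registry/<logical-cluster-a>/configmaps/default/app-config
/registry/<logical-cluster-b>/configmaps/default/app-config
```

The API server resolves which prefix to read and write based on which logical cluster endpoint the request came in on.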
ABDEL SGHIOUAR: Got it. Got it. And then does that mean that every logical cluster implements the full permission model of Kubernetes?
MARVIN BECKERS: Yeah. So each logical cluster has an adjusted version of the authentication and authorization layer of Kubernetes that we all know. But because each logical cluster has its own objects, that means that in each logical cluster you will have different RBAC being applied to your request. So the cluster roles-- they're cluster-scoped, in normal Kubernetes terminology, and in this case we could call them workspace-scoped. So cluster roles are different from logical cluster to logical cluster, and that basically gives you different permissions depending on which logical cluster you're talking to.
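As a sketch of what that means in practice: a standard Kubernetes ClusterRole and ClusterRoleBinding like the ones below, when created in a KCP workspace, would only grant access within that workspace's logical cluster. The role, binding, and group names here are made up for illustration.

```yaml
# Ordinary Kubernetes RBAC objects; in KCP they are created inside
# a workspace and only apply within that workspace's logical cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: configmap-editor            # hypothetical role name
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: team-a-configmap-editor     # hypothetical binding name
subjects:
  - kind: Group
    name: team-a                    # hypothetical group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: configmap-editor
  apiGroup: rbac.authorization.k8s.io
```

Creating the same pair of objects in a second workspace would be an entirely independent grant, which is what makes them effectively workspace-scoped.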
ABDEL SGHIOUAR: Got it. Got it. Interesting. And so then you also have a concept of physical cluster. So what is a physical cluster, in this case?
MARVIN BECKERS: Yeah. So this is something that I think we want to talk a little bit about, because KCP at some point had a bigger scope. It was looking more at solving the multi-cluster scheduling use case. Since the project is part of the CNCF Sandbox, we have decided to scope down a little and focus on the control plane API parts.
And so in KCP, physical clusters-- and I guess to some degree, they still are-- are Kubernetes clusters that you can run workloads on. That is basically the idea, because KCP is only a control plane. And because we took away the container orchestration part, if you want to execute software in the Kubernetes model, you still need something powering that.
So there is this old component of KCP, called transparent multi-cluster, that is frozen at the moment. There, physical clusters were your workload clusters, where your multi-cluster deployments were scheduled and then spread across multiple clusters. That is no longer the case.
But of course, you still need to think about physical clusters, because KCP is only, again, a control plane. So that means if you want to run software against it, and you need to have your own controllers, your own operators, maybe then, of course, you still need Kubernetes clusters that power that. So in a sense, physical clusters are not a formal term anymore but you still need to think about them.
ABDEL SGHIOUAR: Got it. And so you would-- I mean, I'm talking about the previous scope. You would still run KCP, create logical clusters, which will give you these separated control planes, let's say, or endpoints, and then you would make a physical cluster connect to a logical cluster, right? Well, make a physical set of clusters-- and by physical I mean virtual machines or whatever. They will dial up to the endpoint, which is part of a logical cluster.
MARVIN BECKERS: This is basically one of the patterns you can implement. In KCP, all we think about is APIs. But if you define an API where you say, OK, I want something to happen-- and most APIs are about doing something-- then you can implement, I don't know, a syncing component or controller that takes the information from the central control plane that you're running with KCP and distributes it to physical clusters.
And that can also be higher-level resources. We don't necessarily need to create deployments or something in KCP. We can say, for example, let's say you want to implement providing databases. So then in KCP you would create a database API, you would create that, and then it would be synced down to physical clusters where the actual database instances then would be started.
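For instance, the database API Marvin describes might be consumed with an object like the one below. Everything here-- the API group, kind, and spec fields-- is invented for illustration; the point is that the consumer only deals with a high-level KRM object in KCP, while a KCP-aware controller provisions the real database on a physical cluster.

```yaml
# A hypothetical high-level object a consumer would create in their
# KCP workspace; a controller watching KCP would reconcile it by
# starting an actual database instance on a physical cluster.
apiVersion: databases.example.io/v1
kind: Database
metadata:
  name: orders-db
spec:
  engine: postgres
  version: "16"
  storageGB: 20
```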
ABDEL SGHIOUAR: Got it. Does that mean you could-- I mean, this is a hypothetical example. You tell me if I'm hitting on the right nail here. Would that mean that you could potentially use KCP if you are a cloud provider that runs managed Kubernetes? So you could use KCP as the control plane to create user clusters.
MARVIN BECKERS: I would think that's definitely completely in scope for it, yeah.
ABDEL SGHIOUAR: Using something like the Cluster API, for example, you would run KCP, run the Cluster API on it. And then when a user goes, give me a cluster, then that becomes a YAML file that you submit to KCP, which then provisions a Kubernetes cluster with all its components.
MARVIN BECKERS: Yeah. So usually implementations need to be KCP-aware, so you can't run stock Cluster API against it. But in principle, yes, exactly. That is how you could use KCP as a central control plane and let it handle everything and sync it down to the workload clusters.
ABDEL SGHIOUAR: Yeah. The reason I was asking this is because it's a pattern that is very common, actually, in all the providers that do some form of managed Kubernetes. Even on-prem Kubernetes, it's the same idea. You have a control plane which then provisions other control planes and worker nodes. And so how are logical clusters different than virtual clusters?
MARVIN BECKERS: So I think that depends a little bit on your definition of what a virtual cluster is. So usually, I think this refers to a concept also known as hosted control planes, where you take a subset or maybe the full Kubernetes control plane and put it in pods as just another workload on another Kubernetes cluster.
But of course that means-- maybe they even share a data store, but you need to spin up a new API server, and maybe new etcd, for each virtual cluster. That is more efficient than, of course, provisioning three nodes and making them your control plane in the classic three-node control plane pattern. But still, with logical clusters, because they only exist in the data store and the API server does the separation, you're not spinning up any new processes, and you're not spinning up any new data stores.
ABDEL SGHIOUAR: Got it.
MARVIN BECKERS: Even hosted control planes, they can come up pretty fast.
ABDEL SGHIOUAR: Yeah, because they're pods.
MARVIN BECKERS: Yeah. But a logical cluster in KCP takes a second or so to be there and ready.
ABDEL SGHIOUAR: Yeah. So that's actually a very, very important distinction. Because in a typical hosted control plane or virtual clusters implementation, the control plane is just a pod that runs the entire control plane stack, with the API server, the controllers, and everything. But in the case of KCP it's essentially-- I've seen it in the documentation. It's like a YAML file that you create, but it doesn't really spin up anything. It just separates the etcd storage layer and then implements some sort of authorization and authentication on top of that, essentially.
MARVIN BECKERS: Yeah, exactly. It basically takes what you give as a name, uses it as a partition key, puts that on the storage layer, and then it's basically done. It provisions, I don't know, five, six resources, and yeah, then you're done.
ABDEL SGHIOUAR: Awesome. Because you talked about workspaces-- what is a workspace?
MARVIN BECKERS: Yeah. So a workspace is a higher-level resource for managing logical clusters. A workspace basically models a Kubernetes API that you are provisioning the way that you want to. So when you create a workspace, you get a logical cluster. And workspaces are the user-facing resources, because there are also logical clusters that don't have a workspace representation. Those are like the system clusters, something that an administrator would interact with, but not a normal user. So users primarily create workspaces and then get logical clusters, so they get a Kubernetes API they can talk to.
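As a sketch, a Workspace manifest looks roughly like this. The API group and version, and the available workspace types, depend on the KCP release you're running, so check the KCP documentation for your version before relying on these field names.

```yaml
# A KCP Workspace; creating it provisions a logical cluster behind
# the scenes. API group/version vary across KCP releases.
apiVersion: tenancy.kcp.io/v1alpha1
kind: Workspace
metadata:
  name: team-a            # hypothetical workspace name
spec:
  type:
    name: universal       # a general-purpose workspace type
```

KCP also ships a kubectl plugin so users can create and switch into workspaces without writing the manifest by hand, along the lines of `kubectl ws create team-a --enter` (the exact command syntax may differ between plugin versions).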
ABDEL SGHIOUAR: Yeah, it's just a way of organizing resources, essentially. Cool. And then there is this other concept-- and I'm still going through the concepts from documentation-- API exports and API bindings. What are these?
MARVIN BECKERS: In Kubernetes, when you create a CRD you do basically two things at once. So first of all, you define a schema for the API resource, but you also register it with the API immediately. So you have the schema definition and it gets immediately registered. But these are two steps, and KCP separates them. So in the API export a service provider, like someone who wants to provide an API, they would define a schema of the API, but this schema would not become available as an API resource. So it's just the schema, basically. It's just, OK, here is an API that I am offering.
ABDEL SGHIOUAR: Got it.
MARVIN BECKERS: And the API binding is then the consumer side where the consumer says, OK, I want to bind this API because I want to have it available within my workspace as a resource that I can create objects from.
ABDEL SGHIOUAR: Got it. So you basically separated the way Kubernetes does CRDs into two separate steps. You would create the schema first, and then to make it available, you have to create the API binding in the workspace to be able to consume that API.
MARVIN BECKERS: Exactly. And that's why you can also convert the CRD very easily to a resource schema in KCP.
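Roughly, the two sides of that split look like the manifests below. The API names are invented, and the exact group, version, and reference fields vary between KCP releases, so treat this as a sketch of the shape rather than a copy-paste example.

```yaml
# Provider side: an APIExport offering an API, referencing the
# schema (an APIResourceSchema) without making it consumable yet.
apiVersion: apis.kcp.io/v1alpha1
kind: APIExport
metadata:
  name: databases.example.io         # hypothetical API
spec:
  latestResourceSchemas:
    - v1.databases.example.io        # hypothetical schema name
---
# Consumer side: an APIBinding created in the consumer's own
# workspace, making the exported API available there.
apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
  name: databases
spec:
  reference:
    export:
      path: root:providers:db-team   # hypothetical provider workspace
      name: databases.example.io
```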
ABDEL SGHIOUAR: Is that done for-- this implementation, is it for security reasons, or for efficiency reasons? Why was it implemented this way?
MARVIN BECKERS: It's more so that API consumers can mix and match which APIs they want to have available, because in Kubernetes, when you create a CRD on a cluster, that is the API resource that you have available. And usually it can only be provided by a single provider, like a controller, an operator that you start. And in KCP you can do this mix-and-matching, and you can also consume APIs from different providers. You can say the same API is provided by two, let's say, teams in your organization. And you want to choose which of the teams should be providing that, and then you select one of the API exports, basically.
ABDEL SGHIOUAR: Got it. I mean, it's hinting toward platform engineering in a way that you could-- it's kind of like having a catalog of APIs available through the exports. And then you can bind your workspace to an API to be able to use that specific API, essentially.
MARVIN BECKERS: Yeah, exactly.
ABDEL SGHIOUAR: Got it. Cool. Awesome. So we talked briefly about stuff that KCP can do, examples and stuff, but I want to go back to that concept of multi-tenancy. Why is multi-tenancy-- I mean, I don't know if it's easier or more complicated, but how does KCP solve multi-tenancy that cannot be solved with traditional Kubernetes?
MARVIN BECKERS: So we talked about this earlier. With virtual clusters, you could spin up a Kubernetes API for everyone-- each team that wants to work on their own, be self-sufficient in a way, and do self-service. But that is quite costly, whereas KCP workspaces are basically-- they're very cheap, they're easy to create.
And then you have these interactions between workspaces. So I can create an API export in one workspace, where I, as a service provider, own the workspace, and service consumers, in their own workspaces which they own and have full permissions to, can create API bindings and consume APIs across these boundaries. So teams are basically using different Kubernetes API endpoints, but they still share the system from which they can consume APIs-- the service catalog that you mentioned.
ABDEL SGHIOUAR: Yeah. So it sounds like it gives you the option to reuse APIs, while if you would do something like virtual cluster, then you don't have that option. If there is a CRD, you need to install it in your own control plane. Right. Cool.
And also, the other thing that is now coming to mind is-- I mean, multi-tenancy is not only about separating the worker parts, or the data plane, if you want. It's also about separating the control plane, which, as you said, is cheap in KCP because you're literally just creating a YAML file. You're not spinning up anything. So how does KCP itself relate to the Kubernetes upstream control plane? There is a KEP here that you mentioned, KEP-4080. Can you talk about that?
MARVIN BECKERS: Yeah. So KCP is built on mostly Kubernetes code, so we're using large parts of that. And during development of KCP in the last few years, it became clear that in Kubernetes there is the Kube API server, and it has all the container orchestration parts built in.
This KEP is basically more or less about code organization, where there is a step before the Kube API server, which is like a generic control plane. And basically the Kube API server is now built on this generic control plane, so it takes the generic one and adds the container orchestration bits to it.
ABDEL SGHIOUAR: Oh, got it. OK.
MARVIN BECKERS: So if you want to use the Kubernetes resource model, but not the parts that make the Kube API server the Kube API server, you can take this and build your own API server component. So we've been trying to work with upstream to clean up the code and make this generic control plane available. And we think that if you have this generic use case and KCP is maybe too much for you, then hopefully this will enable community members to build their own control planes based on the Kubernetes resource model.
ABDEL SGHIOUAR: And when you mentioned that this KEP is all about code organization, does that mean that before this proposal was implemented-- or is being implemented-- in Kubernetes itself, the control plane plus the scheduler and all the other components were tightly coupled together?
MARVIN BECKERS: Not the scheduler, but within the Kube API server, previously the container orchestration bits, like the core APIs and all of that, were all built in. I'm not the best one to speak about this, but if you take out the core code-- the part that just initializes the Kubernetes resource model, without registering the container orchestration bits-- you now have the separation in the Kube API server, where there's this prototype of a generic control plane, and then the Kube API server built on top of that.
ABDEL SGHIOUAR: Which then has the container orchestration built in. OK, so it sounds like separating Kubernetes itself from Kubernetes to make it possible to spin up the generic control plane, if you just need that, or run standard Kube API server plus container orchestration plus whatever. Cool. That's more like code organization.
All right. So we talked about this briefly, and I mentioned that when you were talking about this concept of consuming APIs across workspaces, it sounded to me like platform engineering in a way. I always call it platform engineering. How do you see KCP used today in the context of platform engineering? Do you have actual use cases that you have seen before?
MARVIN BECKERS: I think we see these use cases out there every day. With all these projects, like we talked about them earlier, a lot of platform engineering at the moment is based on extending the Kubernetes API with useful APIs. And I think this is exactly the use case that KCP fits in, and centralizing this, because usually I don't think you can have one central Kubernetes API. That's not what it was meant for, right?
But you can build one central KCP control plane which then everyone can go to, have the bits of platform engineering in there. So say, OK, I want to create databases from it, I want to create infrastructure as a service from it. And from there it gets scheduled where it needs to be or where it needs to go. But yeah, I think what we're seeing is platform engineering has already taken all the benefits of the Kubernetes resource model.
ABDEL SGHIOUAR: Yes.
MARVIN BECKERS: And probably is more suited with something like KCP because it's meant for this use case.
ABDEL SGHIOUAR: Got it. Yeah, one of the ways I could see KCP being used in the context of platform engineering is if you need to run, for example, a central Crossplane-- just an example-- you would just run it on KCP. You don't need Kubernetes for that. You could just run KCP plus Crossplane, and that's it. And then people can do whatever they want.
MARVIN BECKERS: I think that would be the idea, yeah. So the controllers usually need to be KCP-aware. So you would need to run a Crossplane, a version of Crossplane that is KCP-aware. But in principle, yes, that would be the idea.
ABDEL SGHIOUAR: Yeah. I was talking to somebody from Kubermatic and they were mentioning, without mentioning a name, a nameless company that does hardware, and they were thinking about using KCP inside their hardware for stuff. [LAUGHS] Can you talk about that? Do you have any idea about that?
MARVIN BECKERS: I think in general, it's fair to say that we've been talking-- the maintainers have been talking about KCP for quite a while. I joined the maintainers last year and have also been to conferences and talking about it. And I think what we're seeing is more interest because more and more organizations are realizing, hey, the central control plane could actually help us consolidate what we're running here. And so I think there is a need for that and a lot of companies and organizations are looking at it. And yeah, we think that that hopefully will result in official adoption at some point.
ABDEL SGHIOUAR: Got it. Got it. Interesting. OK. Cool. So the last question would be, how can people contribute to KCP? It's a Sandbox project, right?
MARVIN BECKERS: Yeah, exactly.
ABDEL SGHIOUAR: So where can people find you? Are you looking for contributors?
MARVIN BECKERS: Obviously we're looking for contributors. There is a website, kcp.io, which is the project website from which you can find the documentation. You can find also our biweekly community meeting, which is happening on Thursdays right now, where everyone is welcome. We really like to see new faces and see them join and introduce themselves, and maybe talk about how KCP can be useful to them.
We have a Slack channel on the Kubernetes Slack, so that's kcp-dev. So we also love people joining there and talking to us. I think that's the main part at the moment. Our code is obviously on GitHub. And so we are very much looking for contributors, contributors to the code, contributors to documentation, because obviously I think we found out that this is a fairly complex topic to talk about. So the more documentation and the more we help people get into it, the better.
ABDEL SGHIOUAR: Awesome. And you are going to be at-- I mean, the project is going to be probably at KubeCon North America?
MARVIN BECKERS: Yeah, there's going to be one of the CNCF project kiosks, I think they're called.
ABDEL SGHIOUAR: Yes. In the project pavilion, but yeah.
MARVIN BECKERS: Yeah, exactly. And there will be some maintainers from KCP there.
ABDEL SGHIOUAR: Awesome. So then people can go find the maintainers and talk to them. Thank you for being on the show, Marvin.
MARVIN BECKERS: Thanks so much for having me.
[MUSIC PLAYING]
KASLIN FIELDS: Thank you very much, Abdel, for that interview. We're both familiar with another employee of Kubermatic, Mario Fahlandt, whose last name I probably butchered right there, but whom I work with closely in open source, and I know you've interacted with at events and things before. So I was excited to hear about this topic, which is also something that I think Mario works on. So tell me a bit about it.
ABDEL SGHIOUAR: Yeah, I should start by saying thank you to Mario, because Mario provided, actually, the hardware. So we recorded the episode at the Container Days Hamburg. For the background, Hamburg is a city in Germany, and that's where Kubermatic is originally from. That's where the company was created originally. And so the conference is run by Kubermatic, and we were chatting with Mario and he said, yeah, I'll just bring all the recording equipment, and you can just use them. So he provided all the equipment. I didn't have to bring anything with me. And he also provided the guest, so.
KASLIN FIELDS: Thank you, Mario.
ABDEL SGHIOUAR: Thank you, Mario. [LAUGHS] So yeah, that was pretty cool. First of all, if you have heard any background noise in the audio up to now, it's basically because we recorded in an open space at a conference. So that's the background there. It's not our typical recording setup. But no, we had a super interesting conversation.
So I had heard of KCP before. I was under the impression it means Kubernetes control plane, but it doesn't.
KASLIN FIELDS: Ah.
ABDEL SGHIOUAR: Yes. It clearly doesn't, and they are emphasizing, it doesn't mean Kubernetes control plane. It is KCP. That's it.
KASLIN FIELDS: OK. It doesn't particularly stand for anything, huh?
ABDEL SGHIOUAR: No, it doesn't stand for anything. It might have started as KCP at some point, but they don't intend it to mean-- because it is a universal control plane. It's not only a Kubernetes control plane, essentially.
KASLIN FIELDS: So it's a universal control plane for other types of platforms as well as Kubernetes?
ABDEL SGHIOUAR: It's universal in the sense that you can use it as a control plane to run Crossplane, for example. And then what you would use to control with it, or with Crossplane, is not necessarily Kubernetes. You would use Crossplane to spin up all sorts of cloud resources. So it's universal in that sense that it's not necessarily just for Kubernetes. You can use it for anything else, technically.
KASLIN FIELDS: Remind me what Crossplane does.
ABDEL SGHIOUAR: Oh, yeah. Crossplane is this open-source tool that allows you to use Kubernetes resource model, KRM, to declare cloud resources, and then it will spin them up for you. So you could define a YAML file that defines, for example, a Cloud SQL database or a Pub/Sub topic or something. So instead of using infrastructure as code like Terraform or Pulumi, you--
KASLIN FIELDS: You use KRM.
ABDEL SGHIOUAR: Yeah, exactly. Exactly.
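To make the Crossplane example concrete: a managed resource declaring a Cloud SQL database in KRM looks something like the manifest below. The API group, version, and field names depend on which Crossplane provider and version you install, so this is a sketch of the pattern rather than an exact manifest.

```yaml
# A Crossplane-style managed resource: a Cloud SQL instance
# declared with the Kubernetes resource model instead of
# Terraform/Pulumi. Field names vary by provider version.
apiVersion: sql.gcp.upbound.io/v1beta1
kind: DatabaseInstance
metadata:
  name: example-db
spec:
  forProvider:
    region: europe-west1
    databaseVersion: POSTGRES_16
    settings:
      - tier: db-f1-micro
```

You apply it like any other Kubernetes object, and the provider's controller reconciles it into a real cloud resource.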
KASLIN FIELDS: All right. So it's creating a control plane that uses a Kubernetes-style API to manage objects?
ABDEL SGHIOUAR: Correct. Yeah, that's a very good way of putting it, yes.
KASLIN FIELDS: OK. [LAUGHS]
ABDEL SGHIOUAR: And then if you write your own operators and CRDs, then you can use it because that's just standard Kubernetes stuff, right?
KASLIN FIELDS: Right. So this is a Sandbox-level project with CNCF, and just became-- how long has it been a Sandbox project?
ABDEL SGHIOUAR: I don't remember exactly, but very recently.
KASLIN FIELDS: OK. Yeah, that's what I thought.
ABDEL SGHIOUAR: Apparently they are on their way to-- not graduation. What's the thing before-- incubating. Yeah. So they are on their way to incubating.
KASLIN FIELDS: Yeah. So at this Sandbox stage, usually a Sandbox project within the CNCF is kind of just getting its legs under it from a maturity standpoint. They're establishing, of course, what the project is, working on building the resources so that it's easy for more folks to join in and contribute, working on getting more users.
So once you reach the incubating stage, you are usually starting to branch out in terms of recruiting more contributors, hopefully from other companies than the original founding company of the project, since most projects do come from some kind of company. And then you are starting to have a few users that you can refer to. So did he talk at all about the journey that they've been through so far?
ABDEL SGHIOUAR: I mean, the project started from within Kubermatic, because they have their own platform-- basically, they provide a platform to be able to spin up Kubernetes on-prem or on other cloud providers. But there were reasons why they didn't want to mention some names-- I mean, they didn't want to mention them on the recording. I know some big companies that are actually using KCP internally.
KASLIN FIELDS: Usually with Sandbox projects, it's kind of like that. Like, a few companies are testing the waters on it. They don't want to call it out until they're confident in it.
ABDEL SGHIOUAR: Correct. But from what I understood, they are pretty confident in the maturity of the project itself, or the projects and the maturity of the community. So they are--
KASLIN FIELDS: Of the technology, right?
ABDEL SGHIOUAR: Or the technology as well. So they're very confident that they are on the right path toward incubating, at least.
KASLIN FIELDS: Cool. I look forward to seeing more about it. As you can tell, I haven't listened to the interview yet, so I'm going to have to go and do that after this. But thank you so much, Abdel, for conducting that interview. I'm excited to hear more about this Sandbox project in the CNCF.
ABDEL SGHIOUAR: Thank you.
[MUSIC PLAYING]
KASLIN FIELDS: That brings us to the end of another episode. If you enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on social media @kubernetespod, or reach us by email at kubernetespodcast@google.com.
You can also check out the website at kubernetespodcast.com, where you'll find transcripts, show notes, and links to subscribe. Please consider rating us in your podcast player, so we can help more people find and enjoy the show. Thanks for listening, and we'll see you next time.
[MUSIC PLAYING]