#109 June 24, 2020

Kubermatic, with Sebastian Scheele

Hosts: Craig Box, Adam Glick

Last week Loodse, the makers of the Kubermatic Kubernetes Platform, made that platform open source, and rebranded their company to match. Co-founder Sebastian Scheele joins us to explain how the company and platform came about, why they’ve made their changes, and what exactly a Loodse was anyway.

Do you have something cool to share? Some questions? Let us know:

Chatter of the week

News of the week

ADAM GLICK: Hi, and welcome to the Kubernetes Podcast from Google. I'm Adam Glick.

CRAIG BOX: And I'm Craig Box.


CRAIG BOX: You know you've made it when your name appears in an Apple WWDC keynote. Not our name, unfortunately, but Docker was up there. Apple are currently moving their CPU architecture from Intel to ARM and they called out Linux and Docker as being two things that will be either emulated or virtualized. They haven't quite made it clear how it will work, but you will still be able to use Docker on the magic new Macs when they come out.


CRAIG BOX: It does, however, make me wonder which way that's going to push the industry if we all now have ARM desktop machines running the quote-unquote "Apple silicon." Does that mean that the data center industry will move towards ARM as well? Or will Intel maintain its hold there, and we'll have to build for a different architecture than the one we're using locally? Which was kind of the niceness about Docker-- you can still write once, run anywhere, but you knew that you were using the same architecture underneath.

ADAM GLICK: I believe the goal was to basically have an abstraction layer so what was underneath didn't matter. So hopefully, that will be transparent to all of us who are using them and don't want to ever go back to our days of assembly coding.

CRAIG BOX: Perhaps. It sounds puzzling.

ADAM GLICK: [LAUGHS] Speaking of puzzling, my latest little game that I've picked up-- there's a game called Tick Tock, which is not the video network that is popular right now, but is actually a mobile co-op game. And so you can play with one other person and you each get a different experience. You're each looking at a different set of things and you need to talk to each other, because what you have are actually the clues to solve the puzzle on the other person's screen.

It's a neat little indie game that someone had written. It's a lot of fun. It's a quick one; takes about 2 and 1/2 hours to play through the complete thing and they've done a really nice job with it. So I've been enjoying that one.

CRAIG BOX: Like a slightly smaller version of Keep Talking and Nobody Explodes?

ADAM GLICK: Yes, it's similar to that or to Spaceteam, if you've ever played Spaceteam, where you can all sit together and you're all pretending to be the control panel of a spaceship that is slowly falling apart. And you have to tell other people what to do and they flip the switches to keep the ship alive.

CRAIG BOX: I look forward to the days where we can all sit together again.

ADAM GLICK: Indeed. Let's get to the news.


ADAM GLICK: German Kubernetes vendor Loodse has changed its name to Kubermatic, the name of their primary software product. They've also made the Kubermatic product open source and you can learn more about this announcement in today's interview.

CRAIG BOX: A number of announcements from this week's HashiConf. Nomad, Terraform, and Consul all received updates, and HashiCorp announced a new cloud platform. The stated vision is to provide a suite of managed services to deploy any HashiCorp product on any cloud provider and to enable cross-provider clustering. They launch with a single service, Consul, and a single provider, AWS, in preview.

ADAM GLICK: Weaveworks has released version 1.0 of their open-source release automation tool, Flagger. Flagger implements a control loop that gradually shifts traffic to a canary deployment while measuring key performance indicators like HTTP request success rate, request average duration, and pod health. Based on thresholds set by the user, the canary deployment is either promoted or aborted, and its analysis is pushed to a Slack channel. Congratulations on the milestone.
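The decision Flagger makes on each analysis interval can be sketched in a few lines of Go. This is an illustrative sketch, not Flagger's actual code; the `Metrics` type, `step` function, and threshold values are all hypothetical stand-ins for what Flagger expresses in its Canary resource.

```go
package main

import "fmt"

// Metrics holds the KPIs measured during canary analysis.
type Metrics struct {
	SuccessRate float64 // percentage of HTTP requests that succeeded
	AvgDuration float64 // mean request duration in milliseconds
}

// step returns the next canary traffic weight: -1 signals abort/rollback,
// 100 signals promotion, anything in between advances the canary.
func step(weight, stepWeight int, m Metrics, minSuccess, maxDuration float64) int {
	if m.SuccessRate < minSuccess || m.AvgDuration > maxDuration {
		return -1 // KPIs breached: abort and roll back
	}
	if weight+stepWeight >= 100 {
		return 100 // all traffic shifted: promote the canary
	}
	return weight + stepWeight
}

func main() {
	weight := 0
	healthy := Metrics{SuccessRate: 99.2, AvgDuration: 480}
	// Gradually shift traffic in 20% steps while the KPIs stay healthy.
	for weight >= 0 && weight < 100 {
		weight = step(weight, 20, healthy, 99.0, 500)
		fmt.Println("canary weight:", weight)
	}
}
```

In the real tool, the thresholds and step size live in the Canary custom resource, and the metrics come from a provider such as Prometheus.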

CRAIG BOX: Open Match, a Kubernetes-based platform for assigning players to game servers based on things like latency and skill, has also reached 1.0 this week. The platform is a collaboration between Google Cloud and game engine company Unity. GG.

ADAM GLICK: Many graduations are happening online these days, and container repository Harbor is no exception. The CNCF have announced the project has reached the graduated stage, with 83 contributing organizations and over 300 contributors. Congratulations to the Harbor team for achieving this milestone.

CRAIG BOX: Also moving up in the CNCF are the SPIFFE and SPIRE projects for cloud native identity. SPIFFE, or the Secure Production Identity Framework For Everyone, is a specification, and SPIRE is the runtime that implements it on a wide variety of platforms. One of the requirements to move out of the sandbox was a security self-assessment which, rather fortuitously for a security product, was complimented for due diligence in security and threat modeling. Learn more about SPIFFE and SPIRE in Episode 45 with Andrew Jessup from Scytale, subsequently acquired by HPE.

ADAM GLICK: Google Cloud and their customer Bayer Crop Science have worked together to launch a 15,000-node cluster running on GKE. What do you do with such a massive cluster? The announcement blog suggests that running at massive scale reduces management overhead by limiting the number of clusters you need to run, enables massive internet applications-- think of all the games you could match-- and can drastically speed up data processing jobs. The latter is the use case for Bayer, who uses GKE to help make decisions about R&D on seeds based on their genotype.

15,000 nodes is 50% greater than the largest clusters, by node, cited to date, and three times the limit suggested by open-source Kubernetes. Clusters of up to 15,000 nodes will be available to GKE users later this year.

CRAIG BOX: Google's vulnerability management team has released Tsunami, an extensible, open-source network scanning engine for detecting high-severity vulnerabilities with high confidence. Tsunami scans network ports and then uses a series of plugins to verify vulnerabilities that need administrators' attention. The model is pluggable, so new vulnerability detectors will be continually added, with Google encouraging others to develop detectors for the project as well. It launches with detectors for exposed, sensitive UIs, like admin consoles, and weak credentials, with remote-code execution detection planned for an upcoming release.
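The pluggable detector model described here can be illustrated with a small Go sketch. This is not Tsunami's actual plugin API (the project itself is Java-based); the `Detector` interface and `weakCreds` plugin are hypothetical, showing only the shape of the idea: a scanner fans discovered open ports out to registered plugins and collects confirmed findings.

```go
package main

import "fmt"

// Detector is a hypothetical plugin interface: each plugin inspects a
// host/port pair and reports whether it can confirm a vulnerability.
type Detector interface {
	Name() string
	Detect(host string, port int) bool
}

// weakCreds is a toy plugin; a real one would attempt logins with a
// dictionary of default credentials rather than just checking the port.
type weakCreds struct{}

func (weakCreds) Name() string { return "weak-credentials" }
func (weakCreds) Detect(host string, port int) bool {
	return port == 22
}

// scan runs every registered plugin against every open port and
// returns only the confirmed, high-confidence findings.
func scan(host string, openPorts []int, plugins []Detector) []string {
	var findings []string
	for _, p := range openPorts {
		for _, d := range plugins {
			if d.Detect(host, p) {
				findings = append(findings, fmt.Sprintf("%s on %s:%d", d.Name(), host, p))
			}
		}
	}
	return findings
}

func main() {
	fmt.Println(scan("10.0.0.5", []int{22, 80}, []Detector{weakCreds{}}))
}
```

The design choice is the same one Tsunami states: the engine stays small and generic, while detection logic lives entirely in plugins that can be added independently.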

ADAM GLICK: Amazon has announced that their App Mesh controller for Kubernetes is now generally available. This allows you to add Kubernetes pods to AWS App Mesh and register your pods in AWS Cloud Map. The service is available for customers in around 3/4 of AWS regions.

CRAIG BOX: New in storage this week, Dell announced PowerScale, a new line of storage software and hardware focused on unstructured data. The storage is programmable, with a connector for Kubernetes, and integrates with both on-prem deployments and cloud environments.

ADAM GLICK: Shuveb Hussain has released Gocker, an open-source, mini Docker-like container implementation in Go. In a blog post entitled "Containers the Hard Way," Hussain walks through his implementation of containers in Go, inspired by Bocker, which implements Docker-like containers in bash. As his first major Go project, he admits there may be some bugs and would love you to try it out and file issues against the project.

CRAIG BOX: Don't kid yourself. Kubernetes security is hard and so you'd be "maaad" not to check out the Kubernetes Goat, a new project from self-described security ninja Madhu Akula. Goat is a set of Kubernetes tools that nibble away to create an intentionally vulnerable cluster to help in security testing and vulnerability awareness. It goes without saying you should keep Goat well fenced-in and be very careful about running it outside of an isolated testing environment.

ADAM GLICK: StorPool, a software-defined storage provider, has signed a deal with Sardina Systems, a company known for building and running private clouds on OpenStack and Kubernetes, to provide a managed Kubernetes service on premises. The partnership brings the two companies' software and consulting services to help customers run Kubernetes apps while providing data persistence and disaster-recovery services.

CRAIG BOX: If you've been to the Kubernetes website in the past week, you may have noticed a significant change in how it looks. The site now uses the Docsy theme, with a C, for the Hugo static site-builder, which creates the official Kubernetes site as well as our own. Kudos to the team of folks who made this upgrade happen.

ADAM GLICK: Finally, from the "don't do this in production" files, Ron Eakins has posted his experience using Kubernetes with Oracle 18c in a container, in a cluster running on a Minikube, in a VirtualBox VM-- a full enterprise environment on a laptop. Credit to Ron for getting it all working as a play environment and writing up the experience on how you can do the same. He ends with a link to a post on putting Oracle 19c-- it's one bigger-- on a multi-node Kubernetes cluster.

CRAIG BOX: And that's the news.


ADAM GLICK: Sebastian Scheele is CEO and Co-Founder of Kubermatic-- until last week, known as Loodse-- and an organizer for the Container Days Conference. Welcome to the show, Sebastian.

SEBASTIAN SCHEELE: Thanks a lot, and thanks for the invitation to the show.

CRAIG BOX: Take us back, if you will, to 2015. You've just come off seven years of working at SAP. That was your first job out of school?

SEBASTIAN SCHEELE: Yes. I started directly after university at SAP in different areas-- consulting, and also product development on the HANA database. And yeah, after seven years I was thinking, OK, should I stay for longer, or do I want a new challenge? And I decided to move out, and ended up with Kubernetes, and really enjoyed the years from there on.

CRAIG BOX: What is it that you remember about the launch of Kubernetes?

SEBASTIAN SCHEELE: I was really an early adopter of Docker, using it for my private projects really early on. And one of the challenges I was asking myself was, OK, now I have a Docker container and I know how to deploy it on one server, but how do I get it running on multiple machines? And I didn't want to deal with Ansible or other tools at that time. And when Google announced Kubernetes, I was-- yeah! This could be the tool I was looking for.

And so I started playing around with it and was like, oh yeah, that's cool. That could potentially be not only something for Google Cloud. It could be good for other cloud providers and also for on-prem environments. And then we decided later on, let's start a company around that.

ADAM GLICK: When I think about HANA, HANA is a massive database and Docker is the exact opposite-- compartmentalized, tiny pieces of code. That's quite a switch. What made you decide to make that shift, if, presumably, your day job was on much larger, monolithic projects?

SEBASTIAN SCHEELE: I've always been quite interested in what's new in technology, and so I wanted to see-- OK, we have this big, monolithic machine here with four terabytes of memory, not one process but multiple processes, and one server for all of this. But what alternatives are out there? And so I was looking into microservices and Docker, and I wanted to see what alternatives were there.

And so I started developing some stuff on my own-- how to compose this, how to build microservices, all the concepts, and how do I use this to deploy, not only on my local machine but also on my server. I really enjoyed it-- I need a database, MySQL or a Postgres database, OK: I start a container, I run it, and later I delete it, and I have no dependency--

CRAIG BOX: And no data.

SEBASTIAN SCHEELE: [LAUGHS] Yeah, and no data. On my laptop. And so that was the reason-- more like private interest, to see, OK, what's it capable of, and what are others doing out there, outside of this SAP universe?

ADAM GLICK: At the same time, you also were making a shift from a world of proprietary software into a world of open software. Those are two very different worlds. What made you interested in shifting towards open software versus some of the more proprietary software that was out there doing a lot of the same things?

SEBASTIAN SCHEELE: SAP are still doing a lot of stuff in open source, and HANA was at that time running on SUSE Linux, so I had to deal a lot with Linux and open-source stuff. And of course, from a really high-level view, it's SAP, it's proprietary software, but inside I already did a lot with open source and open software. And so the move was not so big, and I decided, OK, I want to do more-- and hey, cool, Docker is open source. I can easily try it out. I can use this.

So I started using it more and more, and also started contributing more and more to open source itself.

CRAIG BOX: One of the ways people create the community around open source in the real world is through meetups, and I understand that you met your Loodse co-founder, Julian Hansert, at a meetup.

SEBASTIAN SCHEELE: I met him at an event-- not directly at a meetup, but a similar kind of event. And we figured out that we both really like cycling, and so first we started doing a lot of sport together, cycling together. And during the cycling I was talking to him, saying, hey, there's this new technology, Docker, that I'm playing around with, and Google has now announced something completely new called Kubernetes. It can manage containers across several servers-- really fancy stuff.

And so we started looking into this more and more, and then decided, hey, let's start doing something more with it. And so the first thing we did was exactly that-- creating a meetup and creating a community in Hamburg and Munich, where we said, hey, where do we find people who have similar ideas and who we can interact with? So we started the Kubernetes meetups, interacting with people and getting ideas about how others saw it and how others planned to use it. And this is where everything began, more or less five years ago.

CRAIG BOX: So what kind of conversations were you having with people at those early meetups?

SEBASTIAN SCHEELE: It was really interesting. First of all, it was, how do we find people who are interested in this stuff? Because it was, more or less, brand new. And I think people had similar challenges-- OK, I want to scale my software and I want to run it on multiple machines, and what is a good way to do this? I think that was, more or less, the first use case around it. At that time, Kubernetes itself didn't have all the capabilities it has now. You had pods, you had services--

CRAIG BOX: And replication controllers.

SEBASTIAN SCHEELE: Yeah, exactly. No StatefulSets, and a lot of other things which are now common were completely missing, so you can understand what the first things you could use it for were: mostly stateless applications. At that time, a lot of people were still saying, oh no, we will never use this for stateful applications. I think if you asked around now, you would get some opinions in the market, but yes, of course-- that was the first conversation around this.

ADAM GLICK: I'm curious how you decided on Kubernetes. There were a number of open-source container management options at the time, but you, very early on, selected Kubernetes as the one that you wanted to spend your time on and invest in.

SEBASTIAN SCHEELE: Exactly. At the time, there were a lot of other things-- Docker Swarm, Mesos, and Kubernetes. And there were a lot of different discussions, and we were looking into that. And for us, when we started, we said, OK, we see in Kubernetes that, in the best case, it can help run your workload on all the different cloud providers, and it can also run on-prem, and you really get flexibility on that end.

So we really wanted to focus on this container workload stuff-- the main purpose is really that we want to run container workloads. And so we looked into what's the best framework and the best tool for running containers. And of course, on the other side, Google is behind this. They developed Borg and have a lot of experience in this area.

So there's a lot of knowledge already in this, and so we were quite confident to say, OK, let's use Kubernetes first. But it was always clear for us that if Kubernetes turned out not to be the tool of choice and another one won out later, we would potentially switch. We were quite open to that, but we said, OK, we need to focus on one tool first-- let's focus on Kubernetes.

CRAIG BOX: In retrospect, a good choice. Did you look to solve a problem and then say, I will found a company around that? Or did you and Julian decide, all right, we want to get together and then go test the market and see what it is that we should build?

SEBASTIAN SCHEELE: More the second one. We started with meetups and getting the community together. Then we also started the Container Days conference, because the main idea was-- we saw that here in Europe, or in Germany, the local communities are quite well connected. The people in Hamburg know the people in Hamburg, and Munich in Munich, but there was not really a lot of exchange between them. And at that time, there were not so many people. We were thinking, OK, how can we get more people together to exchange on these topics? And we created Container Days as an event to get everyone together.

And at the same time, we started with professional services and consulting for customers who were looking into Kubernetes, who had Kubernetes problems-- to really understand what challenges they were facing, how they tried to solve them, and how we could help them solve them. And we were looking into that to see, can we find a pattern that we see at every customer? And this is how we started the company: by first really understanding what the customer's problem is. Then, later, we figured out more and more in what direction we could build a product.

CRAIG BOX: What is a Loodse?

SEBASTIAN SCHEELE: A Loodse is a Low German word. We are from Hamburg, so harbor, containers, ships-- they're completely familiar to us.

ADAM GLICK: You see them all the time?

SEBASTIAN SCHEELE: Yeah, we see them all the time, but in a completely different context. A fun story, by the way: at the first Container Days conference, a shipping company booked a ticket, and we were like, oh, no, no-- they're thinking about it wrong. But then we called them and said, hey, this is about software containers. And they said, yeah, yeah. Yeah, we know. We want to use them. And we said, ah, good.

CRAIG BOX: We do both!


SEBASTIAN SCHEELE: We do both. [LAUGHING] Yeah, so we were quite used to containers, and we were looking around for what could be a good name. And we came up with the name Loodse, which means marine pilot, or harbor pilot-- the one who comes aboard for the last part of a container ship's journey and brings it into the harbor. And we said, OK, that's a good analogy. Let's use it. And that's the reason why we chose the name.

ADAM GLICK: Once you'd selected a name, what were the mechanics and the process of actually founding a company?

SEBASTIAN SCHEELE: As I'm from Germany, it's quite--

CRAIG BOX: Bureaucratic?

SEBASTIAN SCHEELE: Bureaucratic. Yeah. You must register the company and follow it step by step-- that's the administration side. But in terms of building up the company, we really started by working with customers, helping them solve problems in the space of Kubernetes and cloud native, to really understand the challenges, and getting our first employees on board who worked with us and got the first steps running, so that we really understood, OK, what is the problem and how can we solve it?

ADAM GLICK: One of the early things that you do with a company, you found the company and then you need to fund the company. How did you fund it? Was it a venture capital piece? Did you ask other people? Did you fund it yourselves? Where'd you get the money to run the company? Because you've grown to about 50 employees now or so? So you've grown a fairly substantial organization.

SEBASTIAN SCHEELE: Yes, and we were a little bit different, because we were, until now, completely bootstrapped. The engagement with the customers, working with them, helped us bootstrap the company and bring revenue into the company to pay for our development effort-- that's how we did it. Also, I think if we had been in the US, it would definitely have been a different story, but at that time, four or five years ago, the infrastructure investment scene here in Europe, especially in Germany, was not what it is now.

So we said, OK, we need to find a way to build up the company, because everyone was still asking the question-- Docker, will this be something? Kubernetes what?


Will this really be something? And we said, "yea-ah!" So we said, OK, we need to find a different way to go forward, and working with customers gave us, on one side, really deep insights, helping us figure out what problems they have in the real world. But on the other side, it helped us start the company and grow the company.

CRAIG BOX: One of the first problems people had when Kubernetes was new was running it outside of the cloud environment. It's obviously very easy to hit the button on what was then called Google Container Engine and get yourself a cluster, but getting one in your data center was harder. And indeed, Loodse's original website said that it was founded to run Docker and Kubernetes in enterprise data centers. What did you need to do to make that possible?

SEBASTIAN SCHEELE: When we started with our customers, it was easy when we were on Google Cloud. You press a button, you have a Kubernetes cluster, work done, and you can really focus on the application.

CRAIG BOX: You're welcome.

SEBASTIAN SCHEELE: But when we were on different cloud providers, at that time they didn't have a managed offering-- and especially not in their own data centers. We used a lot of different tools-- Ansible, Terraform, Puppet-- to help customers set up the clusters and manage the clusters. But it was always a similar journey: the customers want, of course, first to have a cluster running. Then they need some upgrades of the Kubernetes cluster, and later they want to have some more clusters.

And at some point we asked ourselves-- we want to have something like Google Container Engine on other cloud providers-- and then it was like, hmm, can we build some kind of this? That was really where the original idea for Kubermatic was born. We said, hey, let's build a tool which can help us manage Kubernetes clusters everywhere.

ADAM GLICK: Many folks in the Kubernetes world spend a lot of time focused on a developer-first experience, but you've talked a lot about IT organizations and focusing on making Kubernetes boring for operations teams. What drove your decision to focus more on the op side of the equation as opposed to the developer teams?

SEBASTIAN SCHEELE: Yeah. For us, it was really important. It comes a little bit from my history and from Julian's history-- we both worked a lot with enterprises. Julian was at EMC before, so we know big enterprises quite well, and we know how painful it is to operate systems in big enterprises.

And at the beginning, we also helped customers get this operating, and we always saw this pain of getting the development stuff running. The developers were, in a lot of cases, faster to adopt this, but in operations, it was really hard to get the automation there, depending on what stage the organization was at already. And we were thinking, OK, how can we enable them faster, and how can we help them operate clusters faster, and not spend so much time on manual operations-- it needs to be fully automated.

And when we started with Kubermatic, we were thinking, what is the best architecture to run Kubernetes? And we came up with the idea, can we leverage Kubernetes to manage Kubernetes? What we built at that time is now called an operator. We technically built a Kubernetes operator which manages and runs Kubernetes on top of Kubernetes.

At the time, there were no CRDs; there were no third-party resources. We used annotations on the namespace to store the information we needed, but technically, what we built at that time was really an operator. And so the whole DNA of Kubermatic and our company is really that we know, from the ground up, how to build operators for Kubernetes and how to use Kubernetes to manage Kubernetes.
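The operator idea described here-- a control loop that converges observed state toward desired state, whether that state lives in CRDs or in namespace annotations-- can be sketched in Go. This is a minimal illustration with hypothetical types and action strings, not Kubermatic's actual code:

```go
package main

import "fmt"

// ClusterSpec is the desired state for one managed cluster
// (in Kubermatic's early days this lived in namespace annotations;
// today it would be a custom resource).
type ClusterSpec struct {
	Name    string
	Version string
}

// reconcile compares desired specs against the observed cluster versions
// and returns the actions needed to converge them. A real operator would
// perform these actions and then re-observe, looping forever.
func reconcile(desired []ClusterSpec, observed map[string]string) []string {
	var actions []string
	for _, c := range desired {
		v, exists := observed[c.Name]
		switch {
		case !exists:
			actions = append(actions, "create "+c.Name)
		case v != c.Version:
			actions = append(actions, "upgrade "+c.Name+" to "+c.Version)
		}
	}
	return actions
}

func main() {
	desired := []ClusterSpec{{"team-a", "1.18"}, {"team-b", "1.18"}}
	observed := map[string]string{"team-a": "1.17"} // team-b does not exist yet
	fmt.Println(reconcile(desired, observed))
}
```

The point of the pattern is that the same loop handles creation, upgrades, and drift: the operator never executes a one-shot script, it continuously reconciles.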

CRAIG BOX: If only you'd written a blog post about it at the time, you could have claimed the idea.



SEBASTIAN SCHEELE: I was thinking about this, also. I should have put more effort into that at the time, but yeah. At least we knew the concept, we used it, and we made the right decision to build the right architecture at the time.

CRAIG BOX: You started with the deployment tooling that each customer was using-- you mentioned before Ansible and Puppet and so on. Now you've built the Kubermatic container engine. First of all, great name.



CRAIG BOX: That now, as you say, builds on top of Kubernetes as its base and it allows you to give a push-button experience to people in their own environment. One thing that comes to mind there is the idea of using Kubernetes to bootstrap Kubernetes has also been adopted by your friends back at SAP. They have a project called Gardener, which works now in a similar way. I'm interested, was there any sort of cross-pollination between the teams working on that?

SEBASTIAN SCHEELE: We know each other, but not really. It's funny that with this concept of using Kubernetes, and running the Kubernetes masters as pods on top of a Kubernetes cluster, SAP and we are doing more or less the same thing. But yeah, we had some smaller discussions-- there were no deep connections there.

I was at one meetup, about three years ago, where I presented a first idea which goes in a similar direction to the Cluster API-- that I wanted to have an abstraction layer for machines. How do I manage machines? Because at that time we had the problem that I needed to integrate every cloud provider, and I wanted to have somehow generic tooling around this. But this was more or less it, and when SAP announced Gardener as open source, we looked into it and thought, oh, this looks quite similar to what we are doing. Interesting.

CRAIG BOX: It's like how Newton and Leibniz both invented calculus at the same time, but you'll probably say, well it was Leibniz. He did it first.

SEBASTIAN SCHEELE: Yeah. [LAUGHING] Of course we would claim we were first.

CRAIG BOX: Around that time, also, you were starting to advertise more that you were supporting the Kubermatic platform running on Amazon and DigitalOcean-- on vendors that did not yet have their own managed Kubernetes system. Did you find your customers were asking for this because they were adopting cloud more? Or did you find that this was simply a direction that made sense to go, with the APIs that were available to you?

SEBASTIAN SCHEELE: We saw more and more that customers want to go hybrid. They want to run some stuff in their data center, but also more and more using the cloud and leveraging the cloud. And yeah, for us it was quite easy to adopt this and to manage this and so we extended more and more into this area.

And, of course, this also makes it much easier for us to test, because on the cloud, as soon as you have the credentials, you can run your CI system against it. So we started adopting this and later adding more and more cloud providers, and also building more and more abstraction, so that it gets easier and easier to add new providers to Kubermatic.

CRAIG BOX: Both those companies I mentioned before now have their own Kubernetes service. It seems that's table stakes for many clouds these days. Were you expecting that everyone was going to build that and in some ways replace the thing that you'd built so soon?

SEBASTIAN SCHEELE: So soon? Good question. On the other side, it validates the point we had from the beginning-- that we think Kubernetes is the way to go. So it was never a big concern for us that, oh no, they have a managed offering now. What we see now more and more-- especially in bigger organizations-- is that it's not enough to only spin up Kubernetes clusters. It needs much more.

You need to look into, how do I enforce my policies globally? How do I take care of my governance? And this, in the best case, across all the different providers, in a centralized way. And so that's also the direction we are looking in more and more: how can we help customers have a single view on all of their Kubernetes clusters and manage their Kubernetes clusters in a scalable way?

CRAIG BOX: Is that how you would describe the Kubermatic platform today? It is more the management layer for many Kubernetes clusters across diverse environments?

SEBASTIAN SCHEELE: Yeah. It's both. It's the management layer, and also still spinning up a lot of Kubernetes clusters. What we also see now, with more and more traction and requests from customers, is the whole edge story-- wanting to run Kubernetes at the edge.

I think it really depends on the definition of the edge. You have the near edge, which is still more like a real data center, where you have decent hardware with servers and things like this, but which is potentially not your big data center. There you have a full-fledged Kubernetes cluster with masters and workers and storage, a good power connection, and a good internet connection. But then you also have more and more devices or clusters which are much farther away from your main data center, and the question is how to connect them.

And that's also an interesting part. With Kubermatic, we also have the capability to run the masters as containers in your big data center, while the worker nodes can be at your edge location. So you have the complex part of Kubernetes-- running the masters and keeping the state in etcd and things like this-- centralized, and at the edge you only have worker nodes.

ADAM GLICK: Out of curiosity, if you're running the master within a container in your Kubernetes, who is running the master for the Kubernetes that you're running inside of?

SEBASTIAN SCHEELE: You have this chicken-and-egg problem once. That's the reason why we started to build KubeOne, to solve this chicken-and-egg problem. We were quite early on this whole Cluster API topic, and so we created our own machine-controller, which leverages the Cluster API to manage machines.

And so we used this for Kubermatic itself, to spin up the worker nodes. The only missing piece then was, OK, how can we spin up the masters as VMs as well? And that is what we did with KubeOne-- mostly concentrating on how to spin up the masters, and then leveraging a lot of the technology which was already built for Kubermatic itself to spin up worker nodes.

CRAIG BOX: Is it theoretically possible, in the snake-eating-its-own-tail fashion, to have two Kubernetes clusters which each have their master running on the other one?

SEBASTIAN SCHEELE: Yes. I think there would be a bootstrapping point, where you need to bootstrap things, but yeah-- as soon as you have this, you could run them in a ring, so that each one is running the other's master control plane.

CRAIG BOX: It does sound a bit like a perpetual motion machine to me. Let's maybe suggest that we don't do that in production.


ADAM GLICK: Recently, you made quite a big shift by releasing the Kubermatic source code into open source with your 2.14 release. What prompted that change?

SEBASTIAN SCHEELE: For us, making Kubermatic open source was always a valid option. And we see now, more and more, that the market is going in the direction we believed in from the beginning. When we started developing Kubermatic, a lot of people were telling us, no, I will run one or two big Kubernetes clusters, run all my applications there, and different teams will have different namespaces.

And we said, yeah, we are not convinced about this. We are thinking more about running a lot of smaller clusters, because when you're running a few big clusters, the problems you try to solve at the application layer, where you want different teams to be independent and able to update, you push down into the cluster. At a certain point, you need to upgrade your cluster, and with new features and deprecated features in Kubernetes, you first need to align across all the teams that it's possible to upgrade.

And so we said from the beginning, we see it more as: you want to run a lot of smaller clusters. Different teams, different organizations, different regions, different cloud providers; there are a lot of reasons why you have more clusters. And now that more and more companies are doing this, we were thinking, OK, now is the right time for such a solution. Let's do it open source and let's continue our development in the open.

ADAM GLICK: When people make things open source, one of the typical questions is which license to use. Kubernetes uses the Apache 2.0 license and you chose the same license. Did you ever consider using a different license for your source code?

SEBASTIAN SCHEELE: Not really. We are following an open-core model, where the core part is really Apache 2.0 licensed, and as we are also using a lot of Apache-licensed code, I think it was clear for us we wanted to go the same way. It was always quite clear for us: Apache, also for KubeOne, which was open sourced already before, and the machine-controller, which was Apache licensed. So we had some quick discussion internally about whether it made sense to consider something else, but we said no. We believe it's the right license for us and we want to go with that license.

ADAM GLICK: What parts of the software are not released as open source?

SEBASTIAN SCHEELE: To understand this, first we need to talk a little bit about the architecture. What we have is called the master cluster, where Kubermatic is running and where the masters of the user clusters are also running as containers. And then you have the worker nodes, which could be on a cloud provider or which could be on-prem.

In the open-source version, you mostly have the capability to run only one master cluster, but your worker nodes can still run on different cloud providers. You could even run on AWS and Google, but the masters would all run in one place. For example, when you spin up the master cluster in Google Cloud, all the masters would run in Google Cloud, and only the worker nodes would be on AWS or Azure, or even on-prem.

Having the masters and the workers in the same location, having several of those master clusters, and having them connected together in one UI, that's part of the enterprise version, which we are selling.

CRAIG BOX: Now, another change that comes along with this release is the name of your company has changed from Loodse to Kubermatic. Why now?

SEBASTIAN SCHEELE: Even though we really like that name, we figured out that Kubermatic conveys much more of the vision of our company, that we really want to help our customers around Kubernetes and automation. And so we think Kubermatic is a much stronger word. And on the other side, yeah, the story behind the name is something you always need to explain. It's not obvious for non-German speakers, and for English speakers it's even hard to pronounce. So we said, OK, potentially--

CRAIG BOX: It looks like a Dutch word to me when I just see it on the page.

SEBASTIAN SCHEELE: Yeah, it's Lower German. It's still a valid word in Dutch, so yes, German and Dutch speakers understand the word, but then it ends quite fast. And so we said, OK, if we're now making this big move for the company, potentially it's also time to think once more about whether Kubermatic is the much stronger name for the organization. And so that was the reason that we changed it.

CRAIG BOX: Who makes a change like that? Is it a decision that you make alone as the CEO? Do you have now investors that you need to bring on board? Was it a whole company decision to make such a big change?

SEBASTIAN SCHEELE: Yeah, first it was me and Julian discussing whether it's now the right time. And of course, then we included more and more people and discussed inside of the organization: what are the pros? What are the cons? And we came to the consensus, we said, yeah, we believe it's the right decision and the way we want to go forward. And then we said, OK, let's go, let's do it open source and let's change the name.

ADAM GLICK: On top of running Kubermatic, you are also an organizer for the Container Days Conference. What is Container Days?

SEBASTIAN SCHEELE: Container Days is a conference really focused on cloud-native technology and container technology. And the initial idea was really, let's bring the community together. We were running a lot of meetups at that time, Kubernetes meetups in, I think, Hamburg, Berlin, and Amsterdam. So we said, we want to get the community together.

I was, at that time, or a bit before, at the first KubeCon in San Francisco. And I really enjoyed having a lot of different people from different areas that you can exchange with. And so we were thinking, OK, how can we better bring together the German, or the European, community? And so we started the conference. Last year was the fourth edition, and it has grown quite big; we had over 1,200 attendees. Unfortunately, because of corona, we needed to skip this year, but we're looking forward to having it next year again.

CRAIG BOX: Did you consider running a virtual event?

SEBASTIAN SCHEELE: Yeah, we were considering it. We decided we would do more webinars instead. We had also planned a day-zero event at KubeCon called OperatorCon, to talk about all the new stuff which is happening around operators.

And when KubeCon was moved, we split it into a lot of smaller webinars, and we said, OK, that's potentially the way we also want to go with other events for now. Not doing a complete one-day or two-day, full-day virtual conference, but really looking at how KubeCon goes there, because I think that will show how good this can be, or if it really works. Yeah, it will be really interesting to see.

ADAM GLICK: Now that you've open-sourced Kubermatic, what does success look like for you and your organization?

SEBASTIAN SCHEELE: We really want to build up a community around this and get more and more users to start using it, users who are facing problems like: how do I automate my operations? How do I run not only one or two clusters, but tens, hundreds, or even thousands of clusters? And we want to work together with them and, of course, also look for contributions, bring new functionality into Kubermatic, and build an ecosystem around it. And yeah, then we will see what direction it goes.

CRAIG BOX: Well, we wish you all the very best, and it just remains for me to say thank you very much for joining us today, Sebastian.

SEBASTIAN SCHEELE: Thanks a lot, Craig and Adam, for inviting me to the show. I really enjoyed it, and I'm looking forward to speaking to you soon.

CRAIG BOX: You can find Sebastian on Twitter at @scheele, and you can find Kubermatic on the web at kubermatic.com.


ADAM GLICK: Thanks for listening. As always, if you've enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on Twitter @kubernetespod or reach us by email at kubernetespodcast@google.com.

CRAIG BOX: You should also check out our website at kubernetespodcast.com, where you will find transcripts and show notes, as well as links to subscribe. Until next week, take care.

ADAM GLICK: Catch you next week.