#88 January 28, 2020

VMs, Edge, and Platform9, with Madhura Maskasky

Hosts: Craig Box, Adam Glick

Madhura Maskasky is co-founder and VP of Product at Platform9, a company that manages both OpenStack and Kubernetes. She talks to Adam and Craig about the transition from VMs to containers, why OpenStack is still relevant, and what it takes to offer a 99.9% SLA on cloud-native applications.

Do you have something cool to share? Some questions? Let us know:

Chatter of the week

News of the week

CRAIG BOX: Hi, and welcome to the Kubernetes Podcast from Google. I'm Craig Box.

ADAM GLICK: And I'm Adam Glick.

[MUSIC PLAYING]

CRAIG BOX: Big weekend for some of the countries around the world. Happy Australia Day. Happy Chinese New Year. Hard time to be a citizen of either of those countries, as it turns out.

ADAM GLICK: Yes, indeed. I hope things are getting better, both with the wildfires and the outbreak in Wuhan. And I hope that all of our listeners are staying safe and that their families and friends are doing well.

CRAIG BOX: Have you been out anywhere interesting?

ADAM GLICK: Been a little rainy here in Seattle, as is common for this time of year. But we have had a chance to start watching a new series on Netflix called "Schitt's Creek." That is S-C-H-I-T-T apostrophe S, just in case you're curious.

CRAIG BOX: The paddle joke.

ADAM GLICK: It could be. It's a pretty funny comedy. The episodes are short. They're like 20 minutes long. And it's just a fun romp, so if you're looking for something kind of giggly to watch. We're a few years late to it, but it's been fun.

CRAIG BOX: Let's get to the news.

[MUSIC PLAYING]

ADAM GLICK: VMware has launched Project Nautilus, bringing OCI containers to VMware Fusion, their Mac desktop product. Nautilus builds on the work done for Project Pacific, which embeds Kubernetes directly into their server-side products. With Nautilus, containers can run in a "pod VM," running directly on the hypervisor for better isolation. This functionality comes in a tech preview which can be downloaded for free. The preview runs each container in its own VM, but product manager Mike Roy explains that this is leading to a world where containers, VMs, and Kubernetes resources all co-exist.

CRAIG BOX: Google Cloud announced the GA release of Config Connector, a Kubernetes operator which lets you manage other GCP infrastructure using the Kubernetes resource model. Using this operator, you don't have to learn and use multiple conventions and tools to manage your infrastructure. Config Connector manages over 60 GCP resource types and can be installed as part of Anthos on GKE or on any standalone Kubernetes cluster.
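
For a sense of what "the Kubernetes resource model" means here, this is a minimal sketch, using the Kubernetes Python client, of creating a Config Connector resource. The StorageBucket group, version, and spec are based on Config Connector's published resource types, but treat the exact fields as assumptions to verify against your install.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster
api = client.CustomObjectsApi()

bucket = {
    "apiVersion": "storage.cnrm.cloud.google.com/v1beta1",
    "kind": "StorageBucket",
    "metadata": {"name": "example-bucket", "namespace": "default"},
    "spec": {"location": "US"},  # illustrative spec; check the resource reference
}

# Config Connector's controller watches this object and creates the actual GCS bucket.
api.create_namespaced_custom_object(
    group="storage.cnrm.cloud.google.com",
    version="v1beta1",
    namespace="default",
    plural="storagebuckets",
    body=bucket,
)
```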

Google also shared a Forrester Total Economic Impact study of Anthos, which predicts over 4X return on investment based on studies of early customer transformations.

ADAM GLICK: Octarine announced the release of two new open source projects. The first is the Kubernetes Common Configuration Scoring System, or KCCSS, a new framework for rating security risks associated with misconfigurations. This tool is similar to the Common Vulnerability Scoring System, CVSS, which is the industry standard for rating vulnerabilities, but instead focuses on the configuration and security settings themselves, as well as trading a V for a K and a C in its acronym.

The second project is kube-scan, a workload risk assessment tool based on KCCSS that scans 30 Kubernetes configurations and settings to identify and rank potential vulnerabilities in your application and help set a security baseline.

CRAIG BOX: Fancy managing your clusters from your phone? Kubenav is a new open source app which lets you navigate your Kubernetes environments. It's built on the cross-platform Ionic framework, so you can download it from the App Store or Google Play or get desktop versions for Mac, Windows, or Linux. A 1.1 release is already out, and the author, Rico Berger, is actively fixing early issues and helping early adopters.

ADAM GLICK: Permissions Manager is a new application from Italian company SIGHUP that lets you create Kubernetes users and map them to roles and permissions from a web interface. It came out of an internal use case and has been open sourced in case it's of any use to the community.
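
Under the hood, mapping a user to roles and permissions in Kubernetes comes down to RBAC objects. As a rough sketch of the primitive a tool like this wraps (the user name below is hypothetical, and the role is the built-in read-only ClusterRole), here is the equivalent RoleBinding created with the Kubernetes Python client:

```python
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# Bind a (hypothetical) user "jane" to the built-in read-only "view" ClusterRole
# within a single namespace. A plain dict body is serialized as-is by the client.
role_binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "jane-view", "namespace": "default"},
    "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "ClusterRole",
        "name": "view",
    },
    "subjects": [
        {"apiGroup": "rbac.authorization.k8s.io", "kind": "User", "name": "jane"},
    ],
}

rbac.create_namespaced_role_binding(namespace="default", body=role_binding)
```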

CRAIG BOX: Tired of your pods dying naturally and want to speed up the process? Eugenio Marzo from Sourcesense has posted a blog on gamifying chaos engineering with an unusual twist on the classic Space Invaders game, where you kill one of your pods every time you destroy an enemy. Perhaps not the most practical way to chaos test your systems, but I'd argue it might be more fun. The game joins other pod killing showcases, such as Whack-a-Pod and Kubernetes DOOM.
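
Stripped of the Space Invaders wrapper, the core move in all of these games is just deleting a pod. A minimal sketch of that primitive with the Kubernetes Python client, assuming a hypothetical target namespace called "demo":

```python
import random

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Pick a random pod in a (hypothetical) target namespace and delete it.
pods = v1.list_namespaced_pod(namespace="demo").items
if pods:
    victim = random.choice(pods)
    print(f"Deleting pod {victim.metadata.name}")
    v1.delete_namespaced_pod(name=victim.metadata.name, namespace="demo")
```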

ADAM GLICK: Patrick Ohly of Intel has posted about the challenges when using the container storage interface and ephemeral inline volumes. He covers how the original goals in 1.15 were that you would have either ephemeral or persistent volumes and access them differently. As the need to do ephemeral storage in a pod became evident, users needed a way to define this kind of storage. His post goes through the use of these inline volumes, and also calls out some of the challenges that upcoming releases aim to overcome.
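
For context, an "inline" CSI volume is declared directly in the pod spec rather than through a separate PVC, and it lives and dies with the pod. A minimal sketch using the Kubernetes Python client, with a hypothetical CSI driver name standing in for one that supports ephemeral inline volumes:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "inline-volume-demo"},
    "spec": {
        "containers": [{
            "name": "app",
            "image": "busybox",
            "command": ["sleep", "3600"],
            "volumeMounts": [{"name": "scratch", "mountPath": "/data"}],
        }],
        "volumes": [{
            "name": "scratch",
            # Inline CSI volume: created with the pod and removed when it goes away.
            # The driver name is hypothetical and must support ephemeral inline volumes.
            "csi": {
                "driver": "example.csi.vendor.com",
                "volumeAttributes": {"size": "1Gi"},
            },
        }],
    },
}

v1.create_namespaced_pod(namespace="default", body=pod)
```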

CRAIG BOX: Pick up your pen and sign up to help write docs for Kubernetes. Zach Corleissen from the CNCF, and guest of episode 5, has posted a wrap-up of the last year in his SIG. There are positives. He calls out a median review time for a change of around one day and a 20% increase in page views. But the underlying message of his post is that the tech writing team hasn't scaled with the project and more help is needed urgently.

ADAM GLICK: Dell/EMC has announced a container storage interface for their Isilon brand of storage devices. Supported operations include volume provisioning and deletion, snapshot creation and deletion, creating volumes from snapshots, and shared storage access for NFS file shares across multiple pods.

CRAIG BOX: The CNCF has released their annual report, highlighting the growth of the community and its projects. Community growth has been strong, with over 50% new member organizations and 89% growth in the end user community. 12 new projects joined the foundation in sandbox status, while Fluentd, CoreDNS, containerd, Jaeger, and Vitess all graduated. The report goes into much more detail about the events, community, and projects.

ADAM GLICK: KubeCon and CloudNativeCon are coming up March 30 through April 2 in Amsterdam. And our friends at the CNCF have done something very special for our listeners. We have a discount code that you can use to get 15% off your ticket to the event. The code is KCEUGKP15. And if that sounds like a lot to remember, don't worry. You can find the code in the show notes on our website.

CRAIG BOX: Congratulations to TriggerMesh, who built a serverless platform on top of Kubernetes. They closed a $3 million seed round from Index Ventures and Crane Venture Partners. If you want to learn more about TriggerMesh, you can check out episode 28, where we talk to Sebastien Goasguen.

ADAM GLICK: And finally this week in pricing news, Amazon has cut $0.10 off the per hour cost of EKS clusters.

CRAIG BOX: And that's the news.

[MUSIC PLAYING]

ADAM GLICK: Madhura Maskasky is co-founder and VP of product at Platform9. Before that, she worked at VMware and Oracle on distributed systems. Welcome to the show, Madhura.

MADHURA MASKASKY: Thank you.

CRAIG BOX: Platform9 started in 2013 as an OpenStack company. Having worked beforehand at commercial vendors like VMware and Oracle, you will have seen the demand for virtualization in the closed source space. What made you start the company in the open?

MADHURA MASKASKY: When we started the company, we did that because we felt really passionately about two specific trends that we were convinced were going to be there to stay for the long term. One was open source. Containers had started gaining popularity by the time we left VMware, which was end of 2013. OpenStack was definitely there and was in its hype cycle at that time, so it was really popular. Linux, obviously, had been popular for many years by then.

And we felt pretty strongly that the future of infrastructure management software is going to be built all in the open, and we wanted to play an important role in that future. So that was one trend that was important to us. And then the second one that was of equal importance is SaaS managed infrastructure, or managing distributed open source software through a SaaS management framework. So those are the two things that we started the company with.

ADAM GLICK: Did you ever think of that in terms of managing things that were both on-prem as well as in the cloud? Because people talk about SaaS, they think about cloud. But your background was actually these managed infrastructure platforms on premises.

MADHURA MASKASKY: Yeah. Absolutely. So the SaaS managed part of our vision was definitely inspired by the public cloud. You could see the power of SaaS management when Amazon came out with AWS. And what we felt is that there are two important innovations as part of a public cloud. One is the software that manages your infrastructure. And then the second part is being able to supply infrastructure on demand. And we felt that if you decouple the two, then you could take that software aspect-- software that is automated to the highest degree and can manage infrastructure anywhere, not just infrastructure that sits in the data centers of the cloud vendors.

Then that innovation could be effectively even more powerful, because it would let customers take infrastructure lying in their data centers, or edge locations, or co-locations, and then combine that with this highly automated software and utilize that existing infrastructure, just like they would utilize the public cloud. So our vision was to build this SaaS managed infrastructure software that will let our customers take capacity anywhere in their on-prem data centers, but also in their public clouds, to form a true hybrid cloud, if you will.

CRAIG BOX: Does it matter to you what provides the infrastructure in the cloud environment? From the perspective of your SaaS control plane, would you rather that the customer was running infrastructure that is managed by you on the cloud, or could they use GKE or AKS or something?

MADHURA MASKASKY: There are certain parts of it that do matter for us, and I'll tell you why. So we, for example, today do not manage Kubernetes that's created by GKE or EKS, for that matter, because we think by going to the IaaS layer of the cloud providers, we can fundamentally provide an experience that's much more consistent with the experience we provide with other cloud providers. It's also an experience that we can control better, and hence we can guarantee the 99.9% SLA, which we do as part of our contracts to all of our customers.

And we find that it's difficult for us to commit to that SLA if we're managing GKE where a lot of things are not in our control.

CRAIG BOX: We've transitioned a little to talking about Kubernetes, which is obviously a transition that Platform9 made as its customers were looking to adopt Kubernetes. When did you see that that was something people were looking to do, and what did you do in response?

MADHURA MASKASKY: We started noticing-- the trend around containers was there even as we started Platform9 in early 2014. And Mesosphere, they were utilizing containers effectively for big data infrastructure. But then Kubernetes really started becoming the thing that we started hearing about from some of our web-scale customers. I would say in late 2016, early 2017 or so, I remember having an in-depth conversation with the co-founder of Box at that time. And he was one of the early adopters of Kubernetes. And he's gone out and written lots of blog posts about it.

And we were sitting in the room with multiple of their technical folks and our team, and he was walking us systematically through the various challenges and problems that Kubernetes has let them solve, even in its earlier phases. And that was just really impressive to us. And then we followed that by doing a structured comparison internally between Mesos and Kubernetes. And we felt that Kubernetes is built a certain way and it has a certain kind of organic community that is backed by Google, and hence it has that potential of becoming the container orchestration platform that will gain a lot of popularity.

So that's really where Kubernetes started catching our attention.

ADAM GLICK: You mentioned that you started the company in kind of the big hype cycle around OpenStack. And that wave has kind of come and gone. Is it still relevant to both your business and customers?

MADHURA MASKASKY: OpenStack continues to play a pretty important role for a lot of our customers still. We have a number of case studies with customers like Cadence Design Systems-- semiconductor company-- S&P Global-- that's a financial company-- and they continue to be really happy OpenStack customers of ours. And these are just a few of the multiple names. And the reason is if you are an enterprise company today, then you know that virtualization is going to play a role in your infrastructure for some time at least, because there are certain workloads that it just does not make sense to containerize. There is a non-trivial overhead in containerizing any workload.

And so then once you're convinced that you're going to need to run containers and VMs side by side as an enterprise, then the next question is, who are you going to use to run those virtual machines? And VMware is the obvious choice, but many enterprise customers really feel the "v-tax" in terms of the VMware software being expensive at times. And so the only good open alternative that's out there today is OpenStack. And we've invested in making OpenStack extremely easy to consume.

And big credit for that, I believe, goes to the SaaS managed deployment model, which fundamentally breaks through some of the challenges of deploying and upgrading OpenStack. And so our customers end up deriving a lot of value in that managed virtualization stack, as we call it, through our unique deployment model that we've coupled OpenStack with. So that does continue to add value to our customers, and hence it continues to be an important part of the story for us.

CRAIG BOX: When you look at OpenStack and its relationship to Kubernetes, you can run Kubernetes as an application that runs on top of OpenStack. But then you can also use Kubernetes as the layer that orchestrates OpenStack to deploy VMs. And there are newer products, like KubeVirt, for example, which are other ways of running VMs on top of Kubernetes. Let's look at both of those.

So first of all, you need an operating system to run Kubernetes on. And in a lot of enterprise cases, they will still want that virtualization. Do you see that as being somewhere where OpenStack is particularly valuable?

MADHURA MASKASKY: I would probably say it a little differently, in that OpenStack, I think, has an opportunity to play a role in being that IaaS layer provider, the Infrastructure as a Service layer provider, which would include the operating system but also many components in addition to that, right? So OpenStack has that potential of continuing to be the virtualization layer that Kubernetes can run on top of.

However, I would say that KubeVirt, and you just mentioned it-- it's definitely been gaining popularity recently. And it's been of a lot of interest specifically to us. We've written a couple of blog posts about it. My co-founder, Roopak Parikh, did a webinar on it recently, as well. And it's because-- going back to the question that we were just talking about, we believe that VMs and containers are going to continue to co-exist, for some time at least, for various reasons. And OpenStack has good components that can continue to add value in that picture.

But we see a lot of opportunity in moving some of the components that are managed by OpenStack today toward Kubernetes with KubeVirt, so that Kubernetes then becomes that single layer that manages both VMs as well as containers. And then you complement that with some aspects of OpenStack. Take Ironic, for example, which is a component of OpenStack that lets you manage bare metal infrastructure-- it's a component where a lot of investment has been happening from the community recently and it has reached a really good level of maturity. We use it internally for doing a lot of our bare metal orchestration for the cloud that we run.

So that's kind of how I see the future evolving for both our stack, that we offer to our customers, but also in general for the community.
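
To make "Kubernetes managing VMs" concrete, here is a minimal, hedged sketch of the kind of object KubeVirt introduces: a VirtualMachine custom resource created through the same API server as your pods. The API version and spec follow KubeVirt's demo examples from around this time and may differ by release.

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

vm = {
    "apiVersion": "kubevirt.io/v1alpha3",  # version at the time; newer releases use kubevirt.io/v1
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm", "namespace": "default"},
    "spec": {
        "running": True,
        "template": {"spec": {
            "domain": {
                "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                "resources": {"requests": {"memory": "64Mi"}},
            },
            # A containerDisk boots the VM from a disk image shipped as a container image.
            "volumes": [{
                "name": "rootdisk",
                "containerDisk": {"image": "quay.io/kubevirt/cirros-container-disk-demo"},
            }],
        }},
    },
}

api.create_namespaced_custom_object(
    group="kubevirt.io", version="v1alpha3",
    namespace="default", plural="virtualmachines", body=vm,
)
```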

CRAIG BOX: There are a few vendors that are looking to use Kubernetes primitives and the Kubernetes API model to manage the operating system underneath Kubernetes. And that sounds a little Inception to me, because the whole question of how you do that if the operating system isn't there to start with is a chicken and egg problem. Do you think that that is a model which is going to take off?

MADHURA MASKASKY: I think that there are various layers of that model. And I think there's different ways of doing it. But I think there is a version of that model that definitely has potential. For example, Cluster API-- it's an open source project for standardizing Kubernetes cluster creation and deployment, et cetera. And we've been active contributors to that project for some time. And Cluster API, the way it recommends deploying your Kubernetes clusters is by starting with something that they call a bootstrap cluster. So you start with this bootstrap cluster, and then there's a little bit of that chicken and egg problem, which is how do you create the bootstrap cluster.

So you have to follow some kind of primitive way to create that cluster. But then that bootstrap cluster becomes the entity that will then run Cluster API. It will then create multiple clusters that you're actually going to be using for your workloads and et cetera. And we see value in that model a lot. In fact, it's the kind of model we've been following internally for our deployment of our managed Kubernetes software, where on behalf of each of our customers we've been creating a very similar model behind the scenes that runs in our cloud. So yes, I think we see a lot of value in that model.
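
As a rough illustration of the bootstrap-cluster flow described here, the sketch below uses kind as the throwaway bootstrap cluster and then submits a Cluster object to the Cluster API CRDs. The API version and the (empty) spec are placeholders from around this era, so treat the details as assumptions and consult the Cluster API provider docs for real manifests.

```python
import subprocess

from kubernetes import client, config

# 1. Solve the chicken-and-egg problem with a throwaway bootstrap cluster.
subprocess.run(["kind", "create", "cluster", "--name", "bootstrap"], check=True)

# (Install the Cluster API controllers and an infrastructure provider here;
#  that step is omitted because it depends on the provider you choose.)

# 2. Ask the bootstrap cluster to create the "real" workload cluster by
#    submitting a Cluster object against the Cluster API CRDs.
config.load_kube_config(context="kind-bootstrap")
api = client.CustomObjectsApi()

workload_cluster = {
    "apiVersion": "cluster.x-k8s.io/v1alpha3",  # version current around this time
    "kind": "Cluster",
    "metadata": {"name": "prod-1", "namespace": "default"},
    "spec": {},  # infrastructureRef, networking, etc. omitted for brevity
}

api.create_namespaced_custom_object(
    group="cluster.x-k8s.io", version="v1alpha3",
    namespace="default", plural="clusters", body=workload_cluster,
)
```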

ADAM GLICK: What would be your advice to someone who's managing hypervisors in an enterprise today, be it a Hyper-V manager or someone who's managing VMware, as they think about Kubernetes but might not be familiar with Kubernetes yet?

MADHURA MASKASKY: I would say there are many really good tools that are out there today-- many open source-- to get started with Kubernetes. There is definitely a learning curve with Kubernetes. And sometimes people make the incorrect assumption-- we've seen this with our customers-- of trying to apply the primitives you're familiar with from virtualization straight to Kubernetes. And when you do that, then I believe you come with questions like, hey, why is there no story for migrating pods from one cluster to the other? Well, it's because while you're used to vMotion and VMs migrating in virtualization, that really doesn't translate that well to a container world.

So what we typically recommend to our customers is start with the basic principles. Take something like kind or kubeadm and just apply it on your laptop-- create a single-node Kubernetes cluster. And even if you are a DevOps manager, et cetera, we strictly recommend that you actually go through those steps, and play with it, and familiarize yourself. So I think that's just kind of first order. Then as you build a level of comfort, then what I typically recommend is there are multiple open source tools as well as vendors that are out there. And the beauty of the world-- that open source world we live in today-- is that all of them have a free version of some sort available today.

So pick a couple and test them. Don't go ahead with a particular solution based on its theoretical components or properties that you're reading through. Actually go ahead and do the tests with different alternatives that are out there to figure out exactly what would work for your environment.
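
A minimal sketch of that "start on your laptop" first step, assuming the kind and kubectl binaries are installed; kind creates a single-node cluster by default and points your kubeconfig at it.

```python
import subprocess

# Create a single-node cluster (kind's default) and point kubeconfig at it.
subprocess.run(["kind", "create", "cluster", "--name", "playground"], check=True)

# Poke around to start building familiarity with the primitives.
subprocess.run(["kubectl", "get", "nodes", "-o", "wide"], check=True)
subprocess.run(["kubectl", "get", "pods", "--all-namespaces"], check=True)
```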

ADAM GLICK: If I think about it, on-prem in the past has been the thing that people have focused on, in terms of their data centers or co-los. And the new on-prem thing that I hear people talking a lot about is edge. That seems to be the new word for on-prem to make it cool again. What are you seeing in terms of Kubernetes and the edge at this point?

MADHURA MASKASKY: I think the word edge, again, is kind of going through its hype right now. And there are so many different connotations to edge and different contexts in which it can get used. For example, there is a notion of a thick edge and a thin edge. Thin edge is typically what you would call a deployment model where you have a massive number of IoT devices, or let's say small devices with some kind of embedded operating systems and very, very limited hardware sensors, et cetera, that are sending a lot of data to some intermediate gateway or so.

And then there are thick edge devices. I think some good examples of these are your point-of-sale software servers that are sitting in an individual Starbucks location, for example, or a Container Store location, et cetera.

ADAM GLICK: Everything old is new again. The thin edge is the new word for IoT, and the thick edge is the new word for what we used to call remote office or remote locations.

MADHURA MASKASKY: That's true. Yeah. The remote office, branch office pattern was so popular back at VMware when we were there-- and now we're coming up with new terminologies today. But what is distinctly changing between the world in the past and now-- taking the example of that thick edge-- is that while in the past it was OK to just have some kind of dumb, limited device that could sufficiently act as a register for you to run your transactions as, say, a retail vendor, that's no longer sufficient for these vendors to continue to stay relevant, because, for example, for a company like Starbucks, they want to provide a highly customized experience to their end users when they even enter the store.

For example, if you were to enter, ideally they would pop up a message on your phone that says, welcome, here's your Pumpkin Spice Latte that we've made ready for you, or something of that sort. And they're able to do that because of AI, and machine learning, and so on. But that requires certain devices with a certain level of intelligence to be deployed at that edge for latency and other reasons. So what's happening is that the pattern is evolving in a way where a number of these vendors from the retail vertical, or many other adjacent verticals, are required to deploy what you'd call a large number of micro data centers.

And a micro data center would be just either a single server or at the most a single rack with a top of rack switch or so. So that adds a layer of complexity that previously on-prem vendors just did not need to deal with as much, because all of these intelligent micro data centers now need processes to keep them upgraded, and updated, and to be able to push intelligent software on demand to these devices ideally, because you want to continue evolving and staying relevant. So those are some really interesting challenges coming up with edge.

And we think Kubernetes can play a really interesting role, because containers are really ideally suited to run on these end devices, we believe. And then Kubernetes can be that intelligent orchestrator that could be managing a local pool or collection of these micro data centers and pushing software actively to these different nodes that are part of those clusters.

CRAIG BOX: Your Platform9 managed Kubernetes product is offered with a 99.9% SLA over cloud, or on-prem, or edge environments. There's a famous quote from Alan Kay that says "people who are really serious about software should make their own hardware." But when you're trying to run something across multiple other people's hardware, what does it take to be able to offer such a thing on environments you don't necessarily control?

MADHURA MASKASKY: That's probably the hardest part of building what we do today. One of the learnings we took from VMware is-- Diane Greene used to say that VMware is so powerful because it's one of the most disruptive non-disruptive pieces of software that has been built. And that non-disruptive part was really important to the VMware founders, because they wanted to make sure that their virtualization layer just completely seamlessly integrates with any hardware that's built by any vendor. And they realized that they needed that to reach a level of popularity.

And we kind of realized that, as well. We felt that if we want to stay true to our vision and make it a reality, then we really have to integrate with the various versions of Linux and hardware that our enterprise customers are going to have, because enterprises do not like to be dictated to in terms of the hardware they need to deploy. So it was probably the biggest challenge that my co-founders Bich Le and Roopak Parikh, for example, had to tackle. What it meant was creating various layers of abstraction in our software that goes on customers' premises.

And being prescriptive but with flexibility-- which is, we typically say that these are the networking plugins, for example, that we test with and that work, but you have a choice of integrating your own, and this is the impact it will have on the SLA. So those are the kinds of flexibilities and trade-offs that we always need to draw.

ADAM GLICK: You also offer a managed application as a service. And you offer the three nines SLA with that, as well. How have you built that to make that possible?

MADHURA MASKASKY: When we looked at Kubernetes specifically, we realized that today it's no longer sufficient to just provide Kubernetes as a vendor, because every customer who runs Kubernetes needs to run at least two to three of the most popular systems applications to keep it really up and running and be able to deliver the SLA that they need to deliver. The most popular ones are Prometheus for monitoring, and at least some kind of log collection capability or framework, including Fluentd for log aggregation and pairing that with the Elastic stack for indexing of logs and searchability, or something like Loki, which is a little bit more lightweight. And then finally, service mesh.

These are the three that we see in that order being almost required for every enterprise customer. And so what we realized is that A, it's no longer sufficient to just provide Kubernetes. You need to deploy it with at least these applications. It's ideal for your customers if they're built into your company's stack. But what's most important is deploying these stacks just as a Helm chart or an Operator is not sufficient. In order to deliver the right experience for your end customers, you really need to think about these apps and their lifecycle fundamentally the same way you look at Kubernetes.

And so Prometheus, for example, has a certain recipe for creating a multi-tenant, highly scalable deployment of Prometheus that includes deploying Cortex and a few other components, which you don't usually get if you just apply it as a Helm chart. Which is why we realized that in order to deliver the best experience, we really need to package these as what we call managed apps, which means each application is thought through fundamentally from the perspective of what is the best configuration that can provide value to customers, and then how do we manage the lifecycle of those components. So that's where the managed apps initiative is important.

CRAIG BOX: One of the components that every Kubernetes user has to have is the etcd database. And inspired by the work done on kubeadm to make it easier to deploy Kubernetes clusters, Platform9 developed etcdadm, which I always like to refer to-- this is Adam, my co-host-- as the etcd Adam. What was the goal behind that project?

MADHURA MASKASKY: It kind of went back to what you said, which is we were users of kubeadm. And we ourselves felt the need for a component like etcdadm to manage a highly available cluster of etcd nodes. And our team realized that there isn't something out there today in an open way. And so that was the reason for creating it. And then we use it internally, and we were happy that the project got a good level of reception in the open source community, as well.
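
For a sense of how etcdadm gets used (going by the project's README), the flow is roughly "init on the first node, join from the rest." A minimal sketch, with a hypothetical endpoint and the certificate distribution step left out:

```python
import subprocess

def bootstrap_first_node():
    # Bootstraps a new etcd cluster with this node as the first member
    # (generates certificates and starts etcd).
    subprocess.run(["etcdadm", "init"], check=True)

def join_additional_node(existing_endpoint: str = "https://10.0.0.5:2379"):
    # Adds this node as a member of the existing cluster. The CA certificate
    # from the first node must already be copied here; see the project README.
    subprocess.run(["etcdadm", "join", existing_endpoint], check=True)
```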

CRAIG BOX: There is an etcd operator, which I think was originally developed by CoreOS. Are these projects complementary, or do they serve different purposes?

MADHURA MASKASKY: I guess in a way they're separate, in that etcdadm kind of fits, or is similar to, kubeadm, and would fit more in the larger picture of Cluster API or so, which is building more of a standards-based way of deploying Kubernetes clusters, et cetera. So building something like etcdadm required collaboration from a lot of other members in the community and having them bless it, et cetera. The operator model, in my mind at least, kind of draws a separate parallel. And an operator could co-exist in a world of etcdadm or so, because once the etcd cluster is deployed, an operator could own some of the lifecycle aspects of the cluster, et cetera. But fundamentally they are kind of separate initiatives, I think.

ADAM GLICK: Etcdadm is not the only open source cloud native project that Platform9 is involved in. You're also involved in the Cluster API.

MADHURA MASKASKY: Mm-hmm.

ADAM GLICK: What work are you doing on the Cluster API?

MADHURA MASKASKY: We've been some of the contributors to the Cluster Lifecycle SIG. And what's been particularly important to us is, one, the building and definition of all of the API primitives that are being built for the various parts of Cluster API, so that you have that right standard, again, that can then be used for the various providers-- the AWS provider, the Azure provider, and all of those that are being built. And then the other part that's also important to us is the bare OS, or the bare metal, or on-prem component of it.

A majority of our customers tend to have an on-prem portion to their cloud deployment. And so we've been interested in, and we're going to continue to invest in, that part of the cluster API to make sure that the on-prem provider for bare metal and in future for virtualization, et cetera, evolves and has the capabilities that we think that it should have.

CRAIG BOX: After every KubeCon event, you publish the results of a survey that you did at that event. What insights have you aggregated from these events in the past, and how, if at all, have they changed your thinking about Kubernetes or your product roadmap?

MADHURA MASKASKY: The survey ends up being one of the most valuable sources of data for us from our product roadmap, et cetera, perspective. We also keep a close eye on the KubeCon survey that CNCF itself creates and publishes. But some of the trends that we want to continue keeping an eye on through these surveys are where are our customers-- or just visitors at KubeCon-- where are they deploying Kubernetes today? Where are those trends evolving? Is on-prem-- does it continue to be of importance to folks, or is the trend shifting more towards deploying Kubernetes primarily in the public cloud?

The second and most important aspect to us is what are some of the biggest pain points today that the customers, or these end users, are running into? And how are they evolving compared to the previous KubeCon? You see some really natural evolution patterns, in terms of how the pain points of the past have probably been simplified to some extent by community work or vendor-originated work, and now folks are moving on to the next order of pain point. So those are the two things that tend to be of most importance to us.

ADAM GLICK: Your company is called Platform9. Where are the other three quarters?

MADHURA MASKASKY: [LAUGHS] We tried to get a domain name that could incorporate all of that, but it's just really difficult to fit in. Where our name comes from is Harry Potter. My co-founder Sirish has been the biggest fan of Harry Potter. I myself didn't read the books but loved the movies. And so Platform 9 3/4 is that mysterious platform in the Harry Potter series where, for the people who know about it, that's their gateway to enter into the magical world. And our analogy was that ours is the platform that lets you enter into the magical world of cloud.

CRAIG BOX: All right, Madhura, thank you for sharing your magical world with us today.

MADHURA MASKASKY: Thank you.

CRAIG BOX: You can find Platform9 at platform9.com. And you can find Madhura on Twitter @madhuramaskasky.

[MUSIC PLAYING]

CRAIG BOX: Thanks for listening. As always, if you've enjoyed the show, please help us spread the word and tell a friend. If you have feedback for us, you can find us on Twitter @kubernetespod, or send us an email at kubernetespodcast@google.com.

ADAM GLICK: You can also check out our website at kubernetespodcast.com, where you can find transcripts and show notes. Until next time, take care.

CRAIG BOX: See you next week.

[MUSIC PLAYING]