#59 June 25, 2019

Banzai Cloud, with Janos Matyas

Hosts: Craig Box, Adam Glick

Banzai Cloud is a cloud-native software company that builds Pipeline, a managed cloud native application and DevOps platform, featuring tools for managing multi- and hybrid-cloud Kubernetes deployments. Pipeline is open source, and Banzai Cloud has many other interesting open-source projects, including a Kubernetes distribution, and operators for things like Vault, Kafka and Istio. Adam and Craig talk to its co-founder and CTO, Janos Matyas, who is based in Budapest, but is spiritually of Oahu, Hawaii.

Do you have something cool to share? Some questions? Let us know:

Chatter of the week

News of the week

ADAM GLICK: Hi, and welcome to the Kubernetes Podcast from Google. I'm Adam Glick.

CRAIG BOX: And I'm Craig Box.

[MUSIC PLAYING]

Adam, has being a dad changed your musical tastes?

ADAM GLICK: I would like to say no, but I have found myself finding a lot of very interesting music that I can only call contemporary kids music, maybe. For any of the people that used to listen to Atom and His Package, or may have checked out some of the Parry Gripp stuff that is online, there's a lot of just fun stuff.

"Baby Shark", which was a huge phenomenon, there's a trap mix of that that is quite catchy that we'll put a link to. And my personal favorite that I discovered this past week was Koo Koo Kanga Roo, which is a group out of Minneapolis describing themselves as the Beastie Boys meets Sesame Street that just have some really catchy, fun stuff.

CRAIG BOX: You're making me feel very out of touch.

ADAM GLICK: [CHUCKLES] What have you been checking out this week?

CRAIG BOX: I went down to the village fete in the next village over on the weekend, and they had a fantastic dog show where you'd have a number of different categories that the local people would enter the dogs into. My favorite was the "dogs who look like their owner" category. There were a bunch of delightful little children dressed up with dogs that were probably not quite as happy as the little children were about being dressed up.

But there was one occasion-- the winner overall, actually, was a guy who had a dog called a Puli, which is a black dog with thick dreadlocks. And the gentleman had long, thick black dreadlocks as well. And I'd say that was clearly the best case of the dog looking like their owner on display. Well worth best in show.

ADAM GLICK: Let's get to the news.

[MUSIC PLAYING]

CRAIG BOX: Kubernetes 1.15 snuck out last week to somewhat little fanfare, in keeping with the increased maturity of the project. 1.15 is a bit of a snow leopard release, similar to last week's Istio 1.2, with continuous improvement explicitly called out as one of two headline features. Enhancements to extensibility-- primarily in custom resource definitions-- are the other major set of changes. kubeadm, sometimes pronounced "cube adam", also has a new logo, with a ship's wheel inside an atom. This further reminds us that Kubernetes users can't tell the difference between the letter T and the letter D, and we can't wait to see how they wadder down the English language next.

ADAM GLICK: Lyft announced the preview release of Envoy Mobile, a client network library for iOS and Android that brings the power of Envoy to mobile platforms. The library supports key Envoy features like HTTP/2, QUIC, and gRPC, as well as configuration via the xDS API. The team is looking to make interactions with the library entirely abstracted away from application developers by language-specific APIs. Unlike Envoy, which was production-ready from its initial release, Envoy Mobile is described as a proof-of-concept demo, with Lyft's team still working on the features needed to get the library integrated into their clients.

CRAIG BOX: A couple of weeks ago, we reported on the security vulnerability in docker cp. This week, we have one in kubectl cp, which is actually another path to exploit a vulnerability first disclosed in March. Remember, kids, copying files is hard. New kubectl binaries are available for all the recent releases.

ADAM GLICK: Kontena, our guests on episode 31, have announced version 2.4 of their Pharos platform. Of note is that the Kontena Lens dashboard, one of Pharos' flagship features, is being split out into a separate project. Lens will be available as a commercial add-on to any conformant Kubernetes cluster later this year.

CRAIG BOX: The CNCF is adding some structure to its ever-expanding stable of projects by introducing CNCF special interest groups, or SIGs. The technical oversight committee this month blessed the concept of SIGs, suggesting there would be six or seven of them, and formally starting SIGs for security and storage. Sheesh! SIGs will help assist the TOC by evaluating projects in their area and are modeled on the similar groupings in the Kubernetes community.

ADAM GLICK: Aqua Security 4.2 is out, bringing with it Aqua Vulnerability Shield, a technology that detects and prevents attacks, targeting known vulnerabilities in containers. vShield uses automated vulnerability and component analysis, combined with human security research, to generate runtime policies that can detect and block access to vulnerable components in containers. It's designed to help you balance the risk of having to take a vulnerable component offline immediately versus leaving it in, insecure but usable, by blocking access to various resources during runtime.

CRAIG BOX: Scytale, guests of episode 45, have released Scytale Enterprise 1.0. The product is a broker which connects to identity providers, including those running on-prem, and issues cloud-native identities in the SPIFFE format to your container workloads. Scytale suggests that identity providers are normally only scaled for the static workloads of pre-cloud native environments, and that their technology will help handle identity in bursty or fast-changing systems.

ADAM GLICK: Diamanti has released its 2019 container survey, with some predictable results and a few diamonds in the rough. It starts with a huge change in who is driving container adoption, with IT jumping up 18% to take the wheel in over 35% of cases, while development is content to take a backseat at about 16.5% of cases.

Of course, this comes with a predictable pain felt by almost 24% of the organizations that responded, which find a lack of container and Kubernetes knowledge to be a major adoption inhibitor. On-premises continues to be the most common place where people are deploying containers. Databases are increasingly popular, with over 30% of people trying out a database in a container. This was the second most common workload in a container, after the somewhat catch-all category of cloud-native applications.

Security has moved into the top concern category, edging out IT integration this year. And interestingly, over one-third of companies spending over $100,000 on container software are doing it on bare metal, citing performance as the number one reason, while number two is cost avoidance. Hey, hypervisor makers-- they're looking at you.

Perhaps unsurprisingly, running on bare metal comes with additional challenges, with management complexity and keeping things up to date topping that list.

CRAIG BOX: Finally, some architecture news from Google. A paper published this week suggests that the overhead in having an in-memory cache on the network-- letting your workloads be stateless-- is more than that of having a cache in the same process as the application and then having the workloads be stateful. Modern automatic sharding technologies, like Slicer from Google or Ringpop from Uber, can help overcome the perceived downsides of stateful applications, improving overall service performance.

ADAM GLICK: And that's the news.

[MUSIC PLAYING]

ADAM GLICK: Janos Matyas is the co-founder and CTO of Banzai Cloud. Prior to co-founding Banzai Cloud, Janos founded SequenceIQ, a startup which containerized and pushed big data workloads to the cloud and was acquired by Hortonworks in 2015. He's interested in cloud-native distributed systems and loves that Kubernetes superseded the container orchestration platform he'd built in 2014.

Janos enjoys free skiing, surfing, and naming his company's products after his favorite surf spots. Welcome to the show, Janos.

JANOS MATYAS: Thanks for having me.

ADAM GLICK: Let's start by looking at Banzai Cloud-- the company and the history. What exactly is the company? And what problem was it founded to solve?

JANOS MATYAS: It's more like an evolution. So we started the company in late 2017 in order to make running enterprise applications on Kubernetes [possible], purely based on managed Kubernetes from cloud providers. And in the end, we ended up building a hybrid cloud platform. So basically, it was a long evolution for the company.

ADAM GLICK: The company is based in Hungary?

JANOS MATYAS: Yes, so we are in Hungary, Budapest.

CRAIG BOX: What's the tech market like in Budapest?

JANOS MATYAS: It's pretty good. There is lots of talent, lots of universities. So we have access to good engineers and a bunch of big companies that are over there, like IBM, Hortonworks, Cloudera.

CRAIG BOX: Now, your primary product is Pipeline. Tell us about Pipeline.

JANOS MATYAS: The reason we built Pipeline was to speed up development of applications-- basically, to allow developers to go from commit to scale on Kubernetes, on those cloud providers who support managed Kubernetes. It provisions Kubernetes clusters on premises or in the cloud, either running our own distribution or running on top of managed Kubernetes distributions. And then, basically, it gives out-of-the-box logging, monitoring, autoscaling, and security for all these applications.

ADAM GLICK: So this seems a little different than some of the other things that I've seen in this space, in particular that you're managing, essentially, other managed Kubernetes offerings. What made you decide to look at managed Kubernetes offerings versus going bare metal or straight with your own distribution?

JANOS MATYAS: Initially, our idea was that we cannot build Kubernetes, and we don't understand the cloud, better than those who have built their own cloud. So basically, we always used to use GKE. And our idea was that GKE is great-- it provisions Kubernetes fast, and it gives everything you need when you are about to do Kubernetes.

But it doesn't go far-- high enough-- in the stack. And what I mean by that is that, prior to founding this company, it would only run the workloads in containers. Our idea was that we should run Spark, TensorFlow, Kafka, all these big data workloads on Kubernetes-- automate the whole provisioning and the whole monitoring of these things-- and also Node.js, Spring Framework, Java, and other applications.

So yeah, going back to the original question, we supported this cloud provider managed Kubernetes. But at the end, we ended up making our own Kubernetes version, mostly running on-prem.

CRAIG BOX: Is Kubernetes the thing that your customers want, or are they looking for the platform to run Node.js or Ruby or something?

JANOS MATYAS: We have basically two types of customers, at a high level. We have the SaaS companies, who are running applications in the cloud, and they don't really care about Kubernetes. For them, the most important thing is that they can run their own applications fast and easily, and they can deploy onto these managed Kubernetes offerings or our Kubernetes version. And they get out-of-the-box monitoring, logging, centralized security, and all these things.

So these guys, they don't really care. Basically, the whole Kubernetes stuff is abstracted. It's abstracted behind an API, a CLI, and a UI. And often, they don't even interact with Kubernetes. They interact with the Pipeline platform. Obviously, underneath is Kubernetes, and all the building blocks are built on CNCF components. But these companies, they don't really care what they run.

They like Kubernetes because of the flexibility it gives for them. Obviously, they like being able to run in the cloud. So these guys, they don't really care about Kubernetes.

And at the [other] end of the spectrum, we have the enterprise customers, where they made a decision-- they made a bet-- that Kubernetes is actually the stack they want to run their applications on, and they made a decision on a certain cloud provider and a certain distribution. And then, basically, they want to go through the same things as the SaaS providers-- they want to deploy their own applications. But often, they also want to build internal as-a-service offerings on top of Kubernetes. So for them, it's a bit different than for smaller SaaS providers.

ADAM GLICK: You provide some optimization tools with your platform. What do those optimization tools do for each of your customers?

JANOS MATYAS: Yeah, we have multiple tools. As I said, this was a long journey for us. And along this way, we have built a couple of open source components, which allowed us to optimize usage and costs for our customers, even when we were running purely on managed Kubernetes.

So we have a system called Cloud Info. Basically, we track services and costs from cloud providers, so it's a unified way to track prices and available instance types across five providers, through a unified interface-- API, CLI, and UI. So this is one thing we offer our customers. Basically, they can choose the cheapest cloud, or, using this system, they can select cloud providers where, for example, TPUs are available when they're running TensorFlow.

And then we have another system, which is open source again, called Telescopes, which basically abstracts the whole resource management. Telescopes allows users to claim resources. For example, they can say, I need 48 virtual CPUs and X amount of memory, and maybe I want TPUs or a high-availability network. They submit that to the system, and the system tells them the infrastructure recommendation for a particular cloud provider.
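To make the Telescopes idea concrete, here is a toy sketch of turning a resource claim into a cheapest-instance recommendation. This is not Banzai Cloud's actual algorithm, and the catalog names and prices are made up for illustration.

```python
import math

# Hypothetical instance catalog; in Pipeline, prices and instance types come
# from a service like Cloud Info. Names and prices here are illustrative only.
CATALOG = [
    {"name": "n1-standard-8",  "vcpus": 8,  "mem_gb": 30,  "usd_hr": 0.40},
    {"name": "n1-standard-16", "vcpus": 16, "mem_gb": 60,  "usd_hr": 0.80},
    {"name": "m5.12xlarge",    "vcpus": 48, "mem_gb": 192, "usd_hr": 2.25},
]

def recommend(vcpus_needed, mem_gb_needed, catalog=CATALOG):
    """Return the cheapest (name, count, usd_hr) satisfying the claim."""
    best = None
    for inst in catalog:
        # Enough instances to cover both the CPU claim and the memory claim.
        count = max(math.ceil(vcpus_needed / inst["vcpus"]),
                    math.ceil(mem_gb_needed / inst["mem_gb"]))
        cost = count * inst["usd_hr"]
        if best is None or cost < best[2]:
            best = (inst["name"], count, cost)
    return best

print(recommend(48, 120))  # ('m5.12xlarge', 1, 2.25)
```

A real recommender would also weigh spot availability, zone capacity, and the TPU or network requirements Janos mentions, but the claim-in, layout-out shape is the same.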

CRAIG BOX: So with your Telescopes tool, you're able to help people make choices automatically about where to add or remove nodes from their clusters?

JANOS MATYAS: By default, yes, but, obviously, they can override those choices.

CRAIG BOX: Do you find that people are willing to move only between regions? Or will people actually move to a different cloud in order to run a workload cheaper?

JANOS MATYAS: I think multi-cloud is happening, so it's a real use case, not in the sense that-- so most of our customers are still running on a single cloud. However, they want the option to be able to move either for regulation, for other purposes to a different cloud. So basically, they want to keep the door open if they need to move from this cloud provider to the other cloud provider. So in a sense, yes.

ADAM GLICK: So you're doing billing, letting people know kind of what their bill will be by looking at the prices. You're also making recommendations of what they might want to do in order to optimize that. Does it take the next step as well of deploying to that-- if someone says, I always want to be running on the lowest cost-- to move those workloads around for them?

JANOS MATYAS: Yes. There is one minor note. So we don't do billing. So basically, the customer is running with their own account. So they bring their own Google credential, for example. We just recommend the instance types for them, or we just recommend cluster layouts.

ADAM GLICK: Are you also taking a look at proactively doing-- if they say, I always want to be running on the lowest cost, for instance, will it proactively move them between instance types, regions, or geographies, on-prem, cloud in order to be able to optimize the cost? Or does it make the recommendation, and then the operations team has to decide, OK, go do that?

JANOS MATYAS: So we don't do automatic migration of workloads. So basically, if someone would like to do that, it's available. So we have our own custom scheduler, and we track prices, and we track services across cloud providers. But by default, we don't move workloads. Basically, we operate these clusters for the users, but we don't move their workloads.

There is only one case where we move their workloads, and that happens within a cloud provider, from instance to instance. When they are running, for example, on preemptible instances, and those instances are claimed back by the cloud provider, we have a custom scheduler which drains the nodes and then, behind the scenes, brings up possibly different instance types and moves the workloads from the claimed-back instances onto the new ones-- but not across cloud providers yet.
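A rough sketch of that reclaim-handling flow might look like the following; the instance type names and the fallback table are hypothetical, not Banzai Cloud's code.

```python
# When the cloud claims back a preemptible node, drain it and bring up a
# replacement, possibly of a different instance type. Names are made up.
FALLBACKS = {"preemptible-n1": ["preemptible-n2", "on-demand-n1"]}

def handle_reclaim(cluster, node, available):
    """Drain `node`, then replace it with the first fallback type that has capacity."""
    cluster.remove(node)                # stands in for cordon + drain
    for candidate in FALLBACKS.get(node["type"], []):
        if candidate in available:      # capacity, not price, is the constraint
            cluster.append({"type": candidate})
            return candidate
    return None  # no capacity anywhere; workloads stay pending

cluster = [{"type": "preemptible-n1"}, {"type": "on-demand-n1"}]
print(handle_reclaim(cluster, cluster[0], available={"on-demand-n1"}))
# on-demand-n1
```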

ADAM GLICK: Pricing tends to be relatively static and change in months or several months. Are you also taking a look at things like preemptibles and spot markets that are much more dynamic in price?

JANOS MATYAS: Yeah, so in the case of Google, obviously, it's much simpler than in the case of AWS, where prices fluctuate. So we track prices-- we have price history going back to when we went live, late November, I guess. So we have price information across all the spot markets and instance types. But what we have noticed, in the case of both Google and AWS-- we haven't done that for Microsoft yet-- is that instance types being taken away is not because of price, but because of capacity.

CRAIG BOX: Do you happen to remember if there's any occasion where one zone has always got capacity and runs cheaper? Or does the market mostly work as it's meant to?

JANOS MATYAS: I guess the market works as it's meant to. However, we've noticed spikes when the US west coast wakes up-- for us, it's 6:00 PM; it's 9:00 AM in the US.

CRAIG BOX: Now you mentioned that you predominantly run on the managed infrastructure services, like GKE, that are provided by the cloud providers. But you've also blogged recently about some cases, where those services didn't provide what you need. And you've actually used your own deployment system to manage Kubernetes services that you run for your Pipeline customers on those environments. Why was that?

JANOS MATYAS: We never meant to build our own Kubernetes distribution; however, we ended up doing one. Obviously, managed Kubernetes differs from cloud provider to cloud provider. We are perfectly happy with what Google offers with GKE, for example, so we have not built our own managed Kubernetes for Google. We have one for AWS, and we have one for Microsoft.

And the main reason we have built that is because we need access to new features-- on GKE, for example, those are usually available through feature flags-- or we give access to the API servers. And there are many reasons why people would love to have access to the APIs-- most probably, they use a different authentication and authorization model.

We use Dex, for example, in our case, behind the scenes. And we have a system which turns JWT tokens into RBAC rules. This especially happens with bigger enterprises-- they are not really happy integrating their security model into the cloud provider's security model, so they need something where they can plug in their own stuff. That's one thing.
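The JWT-to-RBAC mapping could be sketched like this, assuming the token has already been verified by Dex or another OIDC provider. The group-to-role table and object shapes are illustrative, not Banzai Cloud's implementation.

```python
# Map a verified token's group claims onto RoleBinding-shaped objects.
# The group-to-role table is hypothetical.
GROUP_ROLES = {"devs": "edit", "ops": "cluster-admin", "auditors": "view"}

def bindings_for(claims, namespace="default"):
    """Turn verified token claims into RoleBinding-like dicts."""
    out = []
    for group in claims.get("groups", []):
        role = GROUP_ROLES.get(group)
        if role is None:
            continue  # unknown groups get no access at all
        out.append({
            "kind": "RoleBinding",
            "metadata": {"name": f"{group}-{role}", "namespace": namespace},
            "roleRef": {"kind": "ClusterRole", "name": role},
            "subjects": [{"kind": "Group", "name": group}],
        })
    return out

claims = {"sub": "janos", "groups": ["devs", "marketing"]}
print([b["roleRef"]["name"] for b in bindings_for(claims)])  # ['edit']
```

The point of plugging in at this layer is exactly what Janos describes: the enterprise's own identity provider stays the source of truth, and only the derived RBAC objects ever touch the cluster.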

The other thing is minor things, like speed. There are some cloud providers where the managed Kubernetes offering is painfully slow. And there are people who are running CI systems on Kubernetes, and for them, it's not acceptable if something comes up in, say, 50 minutes instead of five minutes, you know?

ADAM GLICK: You've also started to build things that are both hybrid and multi-cloud, both running on-prem on people's own hypervisors and then on other clouds. What inspired that decision to spread out that way?

JANOS MATYAS: Since last summer, multi-cloud has been a reality for us-- reality in the sense of being able to do deployments across these five managed Kubernetes providers. So we have a concept called a cluster group, where we're able to group Kubernetes clusters into a group.

CRAIG BOX: It's a good name.

JANOS MATYAS: Yeah. And basically, we do deployments across these cluster groups with overrides. So for example, if somebody was pushing out a deployment to Google and AWS, they were able to push a different configuration for Google or the other cloud provider-- they could override their configs.

In the same way, we're managing the lifecycle of these deployments. It was a very simple lifecycle management-- it still is. We manage updates, deletes, and deployments. This is the multi-cluster stuff.
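The override mechanism can be sketched as a recursive merge of a base config with per-cloud values; the keys shown here are hypothetical, not Pipeline's actual schema.

```python
# One base deployment config, with cloud-specific values merged on top.
def merge(base, override):
    """Return a new dict: `override` merged into `base`, override wins."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)
        else:
            result[key] = value
    return result

base = {"replicas": 3, "storage": {"class": "standard", "size": "10Gi"}}
overrides = {
    "google": {"storage": {"class": "pd-ssd"}},
    "aws": {"replicas": 5, "storage": {"class": "gp2"}},
}

for cloud, override in overrides.items():
    print(cloud, merge(base, override))
# each cloud keeps the base values it does not override
```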

For those applications, people don't really care-- the applications don't really interact with each other. The primary goal is that they have an application, and they need to distribute it across multiple cloud providers. So for example, they need to publish it to a geographic region where one of the providers doesn't have a data center.

CRAIG BOX: Do you expose the differences to someone-- are you saying, here's a single control plane, like the old federation model, where you deploy to that control plane, and it abstracts them away? Or do you have a model where the user sees, I have one on premises in Budapest, and I have three in cloud on Google and one on Microsoft, for example?

JANOS MATYAS: It's abstracted, although at the same time, it's available for them to see what they have and where. Actually, when somebody comes to our control plane and sets up these multi-cluster groups, they are able to set up include and exclude lists. They can add data centers, or they can remove data centers. They can add VM flavors, for example-- they can say, OK, this particular group is not allowed to spin up clusters where they have TPUs, because it costs a lot. So for training, they can go with regular compute clusters. So we have these include-exclude lists.
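The include-exclude filtering might be sketched like this; the flavor names are made up for illustration.

```python
# An optional include list narrows the pool; an exclude list then removes entries.
def allowed(items, include=None, exclude=()):
    """Apply the include list (if any), then drop everything excluded."""
    pool = [i for i in items if include is None or i in include]
    return [i for i in pool if i not in exclude]

flavors = ["n1-standard-8", "n1-highmem-16", "tpu-v3-8"]

# Keep this cluster group away from expensive TPU flavors.
print(allowed(flavors, exclude={"tpu-v3-8"}))
# ['n1-standard-8', 'n1-highmem-16']
```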

CRAIG BOX: Now, there are some other open source components that you've developed as part of this platform and made available to the community. What is Bank-Vaults?

JANOS MATYAS: Bank-Vaults is, at a very high level, a Vault operator. Prior to the Red Hat acquisition of CoreOS, we were using the CoreOS Vault operator quite a lot-- basically for all the secrets: cloud provider credentials, access to different systems, Grafana dashboards. Everything, we've been storing in Vault.

Obviously, that project has been discontinued, so basically, we set up a new Vault operator, and also a CLI tool which initializes Vault, and we've been adding lots of features to operate it properly on Kubernetes. And obviously, in the case of a cloud provider, you get a certain level of security from the provider for your secrets. But we don't know whether, for example, it's using etcd, or if it's a shared etcd where the secrets are stored. We wanted to avoid secrets landing in etcd.

So Kubernetes secrets, for us, were something we wanted to avoid. With Bank-Vaults, besides operating Vault, it's now able to inject secrets directly into pods at the time they're needed, and then they're removed with the process. It injects directly, straight into the pod-- the secret never ends up in etcd or any other store.
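One way to picture the injection approach: fetch the secret at process start and pass it only through the child process's environment, never through a Kubernetes Secret object. This is a simplified sketch, not Bank-Vaults itself, and fetch_secret is a stand-in for a real Vault client call.

```python
import os
import subprocess
import sys

def fetch_secret(path):
    # Stand-in for a real Vault read, e.g. hvac's client.read(path).
    return {"DB_PASSWORD": "s3cr3t"}

def run_with_secrets(cmd, secret_path):
    """Launch `cmd` with the secret present only in its environment."""
    env = dict(os.environ)
    env.update(fetch_secret(secret_path))  # secret exists only in memory
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

result = run_with_secrets(
    [sys.executable, "-c", "import os; print(os.environ['DB_PASSWORD'])"],
    "secret/data/app",
)
print(result.stdout.strip())  # s3cr3t
```

The real mechanism works inside the pod rather than via a wrapper script, but the property is the same: the secret travels straight from Vault to the consuming process.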

ADAM GLICK: You mentioned that you built the Bank-Vaults operator, and I think you've been working on some other operators as well. What made you decide to use the operator model of deployment versus other things, like CRDs or Helm?

JANOS MATYAS: We've been doing lots of operational code. So I like to say that operators are, somehow, human operational knowledge embedded in code. We've been doing this since we had the option from Kubernetes with CRDs. Then we've been involved with the CoreOS guys and the Operator SDK-- I guess that was back in 2017, if I remember correctly.

Even before the Operator SDK, we were involved in that. And we liked the concept, so all our code which was managing these bits, we've been moving into operators. And we have open sourced quite a lot of operators. We have a logging operator, for example, which configures Fluentd and Fluent Bit automatically and moves the logs from all pods into a centralized location-- a bucket, for example, or Elasticsearch.

We have this Bank-Vaults operator, which operates Vault. We have a Kafka operator, which we recently open sourced, and then there are a bunch of others.

CRAIG BOX: Well, let's talk about the Istio operator which you released recently.

JANOS MATYAS: We've been using Istio, I guess, since version 0.8, the first time the multi-cloud, multi-cluster stuff showed up. It was very immature, but we loved what Istio offers. And as I was talking with Adam about before the show-- we really like this Istio stuff. Istio has been released open source by all these companies, and basically, it leveled the whole playing field for us, and we were able to build pretty cool things on top of it.

So we were using Istio for a while. We were deploying Istio with Helm. We had actually no problem deploying Istio with Helm. But as we were going into multi-cloud scenarios, then, basically, we started to make some templating in order to create multi-mesh clusters. And operating these clusters was starting to take up lots of time from us.

Obviously, we went and checked what's available and what's not available. And there were not really other Istio operators available at that time. But there was a huge interest from the community to build an Istio operator. So basically, we decided to go ahead and build an Istio operator, because we needed not just to install Istio across these hybrid environments. It's actually we needed to operate it.

CRAIG BOX: And Louis Ryan mentioned to us last week, the Istio project is going to ship its own operator. Is that built on your work?

JANOS MATYAS: Partially. So right after we released the Istio operator, three or four months ago-- I can't really remember, late winter-- we got in contact with a couple of folks from Google. Martin Ostrowski contacted us, and he proposed doing this work together in the community. And we were really happy to move ahead with that.

So we use Istio-- a great open source product-- and we have built our own Istio operator. But operating Istio is not the end product for us; the end product is operating hybrid clouds that are built on Pipeline. So we didn't want to have our own Istio operator while the industry, or the open source community, came up with a different Istio operator-- one tested and backed by all these companies, like Google. So yeah, we are pretty pleased that we are part of this initiative.

Currently, Google, Red Hat, and we are working on this official Istio operator. Ideally, our goal is to move all the features we have in the current version of our Istio operator into the official one and, basically, let the community pick up from there. Obviously, we are dedicated to supporting the work on this.

CRAIG BOX: Do you think that will hold true for, say, the Kafka and Fluentd communities? Do you think that soon there will be a single operator provided by that community or the vendor who makes the product?

JANOS MATYAS: For Kafka?

CRAIG BOX: For any of them. Do you think it will become something that each open source project has to offer in order to have their workload run on Kubernetes?

JANOS MATYAS: Ideally, yes. In case of the Istio operator, obviously, we've been lucky that Google has seen value in this and is willing to work with us in the open source to continue this project. You mentioned Kafka. For example, in Kafka, I'm not sure. Ideally, I would love to have one Kafka operator. There are, like, two or three Kafka operators out there.

So the reason we were not working on any of them is, first, we have a thing for Go. The Strimzi operator, for example, is built in Java, and I gave up doing Java a few years ago.

ADAM GLICK: I'm sorry for that.

JANOS MATYAS: Yes. The Confluent operator-- we don't know too much about that. I guess it's closed source. But yeah, with Kafka, I would like to have one Kafka operator. Conceptually, though, we do a very different thing. What I mean by that is that we have huge experience operating Kafka, for example, and we also know Kubernetes quite well. And I believe the current Kafka operators out there are not really the best way to run Kafka on Kubernetes. They're all built on StatefulSets, and we believe that this is not the right way.

ADAM GLICK: What is the right way?

JANOS MATYAS: Think lower-level building blocks, like pods and CRs. We love StatefulSets-- obviously, it's a great concept-- but the thing we don't like about StatefulSets is ordering. For example, in the case of Kafka, we wanted to have fine-grained, broker-by-broker configuration.

And also, if you want to remove a broker from the middle of the cluster-- for maintenance or whatever-- you can't really do that with a StatefulSet. You have to remove all the brokers down to that particular ordinal, because that's how StatefulSets work.
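The ordering constraint is easy to see in a toy model: a StatefulSet only ever scales away the highest ordinals, while a pod-level operator can retire any single broker.

```python
def statefulset_scale_down(brokers, target):
    """StatefulSets remove pods from the highest ordinal downward."""
    return sorted(brokers)[:target]

def per_pod_remove(brokers, broker_id):
    """A pod-level operator can retire exactly one broker."""
    return [b for b in brokers if b != broker_id]

brokers = [0, 1, 2, 3]

# Retiring broker 1 for maintenance via StatefulSet semantics forces the
# cluster all the way down to [0]; per-pod management keeps the rest alive.
print(statefulset_scale_down(brokers, 1))  # [0]
print(per_pod_remove(brokers, 1))          # [0, 2, 3]
```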

CRAIG BOX: So you almost need, like, a Kafka StatefulSet or something that describes the relationship between these things that's an extension to that.

JANOS MATYAS: Exactly. And this is the nice thing about Envoy-- that Envoy allows you to do these protocol filters, for example. And there is one going to be built for Kafka.

ADAM GLICK: At KubeCon, you made a number of announcements. What were they?

JANOS MATYAS: I think the most important announcement is the service mesh on Pipeline. So basically, we have the multi-cloud support, and we have the hybrid cloud support. What we have announced, actually, is a unified control plane, which runs on these multiple cloud providers, and it's able to create service meshes across either multiple cloud providers or on premise and cloud.

CRAIG BOX: And what will that enable for your customers? What are the benefits of them running a multi-cloud service mesh?

JANOS MATYAS: Yeah, the good thing is we've been working on this for a while, and Google announced Anthos a month or two ago. First of all, it was very nice in that it assured us that we are heading in the right direction, you know? At least personally, I believe that if Google is doing it, it's probably good for the industry. So that's one thing.

The other thing-- what it means for our customers-- is that, basically, they're able to span out from on premises into these multiple cloud providers. So we have a case study where, for example, we have a customer who's been running mostly on AWS and on premises. They track lots of data, they store lots of data on Google Cloud, and they started to build services that are moving into Google Cloud-- GKE.

So they built all those services on GKE, and now they wanted to put them into a mesh. So this gives them the option to layer these services into a common mesh running across on-prem and two cloud providers.

ADAM GLICK: The name Banzai Cloud, where does that come from?

JANOS MATYAS: Banzai Cloud-- initially, when we started the company, as I said, we wanted to build something similar to OpenShift, a simple enterprise application platform running on managed Kubernetes. And we believed that, OK, this is going to be a pipeline which allows developers to go from commit into production.

So yeah, since we were thinking that this is a pipeline-- you know, I like naming my companies and my projects after surfing spots. So I was thinking, OK, well, there is a famous spot on the north shore of Oahu called Banzai Pipeline, so let's name the product Banzai Pipeline. But it was quite long, so basically, we ended up naming the product Pipeline-- the Pipeline platform-- and naming the company Banzai Cloud. Naming things is pretty hard.

CRAIG BOX: Do people ever confuse you with the art of small trees, the Japanese art of bonsai?

JANOS MATYAS: No, not really.

CRAIG BOX: What would you name your next company or product?

JANOS MATYAS: Well, I don't want to build a next company! I feel perfectly happy here. I did name a couple of products-- Telescopes, Hollow Trees, Pipeline. These are all surf spots, either in Oahu or Mentawai, Indonesia.

CRAIG BOX: There's Telescopes-- I assumed it was just telescoping something. There's a surf spot called Telescopes?

JANOS MATYAS: Yeah-- so if I start a new project, and we know what it's doing, then I will hunt to find a surf spot for the product which somehow resembles what the stuff is doing.

CRAIG BOX: Probably easier than finding an ancient Greek word, which is what everyone else tries to do.

JANOS MATYAS: Yes. When we were about to start the company, we wanted to come up with something similar. But we are not really good with Greek mythology and Greek names. So we ended up--

ADAM GLICK: That's a Banzai Cloud with zero Ks.

CRAIG BOX: I like it because it's got an N and a Z in it.

JANOS MATYAS: And the product doesn't have a K.

ADAM GLICK: What's next for Banzai Cloud? What's on the roadmap?

JANOS MATYAS: Good question. As I said, we have launched the service mesh stuff. Obviously, we are doing some other projects-- a couple of other operators for big data. So basically, we have Spark, TensorFlow, Kafka, and Zeppelin running, and we want to productize somehow a platform out of this for operating big data in the cloud.

We also have some other application stacks-- Node.js, Spring. So basically, we are working on those things. But the main direction for us at this point is investing in this hybrid cloud stuff.

CRAIG BOX: All right, Janos, thank you very much for joining us today.

JANOS MATYAS: Thanks for having me.

CRAIG BOX: You can find Janos on Twitter at M- A- T- Y- I- X, underscore. How would you pronounce that?

JANOS MATYAS: "My-tix."

CRAIG BOX: Is it a Hungarian word?

JANOS MATYAS: Yeah, it's my nickname from my family name. They're strange, Hungarian names.

[MUSIC PLAYING]

ADAM GLICK: Thank you for listening. As always, if you've enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on Twitter at @KubernetesPod or reach us by email at kubernetespodcast@google.com.

CRAIG BOX: You can also check out our website at KubernetesPodcast.com to find show notes and transcripts. Until next time, take care.

ADAM GLICK: Catch you next week.

[MUSIC PLAYING]