#175 April 6, 2022
Bruno Andrade is founder and CEO of Shipa, delivering applications and policy “as code” to Kubernetes with a SaaS model. We discuss founding companies in Canada vs the USA, abstractions for deploying apps, and whether Kubernetes will really ever disappear.
Do you have something cool to share? Some questions? Let us know.
CRAIG BOX: Hi, and welcome to the Kubernetes Podcast from Google. I'm your host, Craig Box.
CRAIG BOX: April Fool's Day came and went without too much fanfare this year, which is exactly how I like it. The volatility of the world in the month of March led to most tech company April Fool's things being canceled for the third year in a row. Can we keep that going, please? Most of the time, they're just corporate ads. Being a whole day ahead of the US here in New Zealand, you can't even assume anything you read on the internet on April the 2nd is valid. I do like things that are fun, but not contrived.
I assume all the Americans know this, but there's a late-night show hosted by a guy named Jimmy and another late-night show hosted by a different guy named Jimmy. They swapped shows for the day, which was amusing. For anyone listening to this podcast on TikTok, a late-night show is content on a thing called linear television that now exists solely to make YouTube clips the next day.
I read the blog of Ron Gilbert, the guy who created point-and-click adventure games like "Maniac Mansion" and "Monkey Island." For the last few years, he's posted every April the 1st saying how he hates the concept, and his blog has been proudly April Fool's Day free for almost 20 years. This year he said he was going to make another "Monkey Island" game. OK, ha-ha, nice one.
Except today, it turned out that he had spent the last two years secretly doing just that. Turns out he also tweeted in 2013 if he was going to make another "Monkey Island" game, he'd announce it on April the 1st. Well played, Ron. Well played. That's the way to do April Fools. That or Gmail, I guess. That was pretty epic too. Let's get to the news.
CRAIG BOX: Grafana Labs has announced Mimir, a new time-series database built on a fork of the Cortex project. Mimir touts improvements including a 40 times reduction in query time over Cortex, at the cost of relicensing to the AGPL. A year ago, Grafana Labs announced that they're relicensing Grafana, Loki, and Tempo under the AGPL in response to other companies taking those products and offering managed services based on them.
Cortex is a CNCF project and those must be licensed under the Apache License. So Mimir is a fork labeled version 2.0 and the designated successor project. With this, Grafana has stopped contributing to Cortex. Mimir is a figure in Norse mythology, and apparently, the name was chosen so that they could combine it with Loki, Grafana, and Tempo to make an LGTM stack joke. Grafana also announced a way to render Doom to a time series panel out of a hackathon, and it's worth pointing out that that project is licensed under Apache 2.0.
Security platform StackRox is now open source. The StackRox toolkit handles risk analysis of a Kubernetes environment, delivers visibility and runtime alerts, and provides recommendations for hardening your environment to proactively improve its security. It integrates at all stages of the container lifecycle — build, deploy, and runtime. Red Hat announced the acquisition of StackRox just over a year ago. The project is now the upstream source for Red Hat Advanced Cluster Security.
The original founding team from Docker are back, with the covers lifted from their new startup this week. Dagger is a portable development kit for CI/CD pipelines. It promises an escape from YAML, using the CUE configuration language, which originated at Google, and automation written in any language, built on Docker's BuildKit project. The company behind Dagger announced a $20 million Series A funding round led by Redpoint Ventures.
We talked to Matt Butcher, creator of Helm, in Episode 102. He's since left Microsoft, along with most of his Deis Labs research team, for a new startup, Fermyon, where as CEO he is working to bring WebAssembly to the cloud. Fermyon has just introduced Spin, a framework for web apps and microservices. Spin provides an interface for writing WebAssembly modules that can do things like answer HTTP requests. Aside from being particle themed — a fermion is, like a boson or a hadron, a type of subatomic particle — Spin lets you run locally with a one-of-a-kind command, spin up. Matt, we salute you.
Google's Distributed Cloud Edge is GA. The Anthos-powered service is designed to help telco customers run 5G core and radio access network functions at the edge, or help enterprise customers run any number of factory or retail workloads. GDC Edge is a fully-managed hardware and software solution with options ranging from individual 1U servers to full racks.
Conference news. The schedule for this year's IstioCon has been released. The virtual event runs for five days starting April the 25th. Also announced this week is PlatformCon, a virtual event for platform engineering on the 9th and 10th of June.
A white paper from security company Chainguard analyzes various container base images of popular open source projects. Don't run the Debian Node 17 base image, as it has up to 1,000 known vulnerabilities. The Alpine image for the same has a much more manageable 0 to 4. Props to authors John Speed Myers and Zac Newman for working a pop music reference into the title.
After selling off its enterprise business and refocusing on developers, Docker has hit its stride in terms of its new business model — charge those developers money. The new Docker has raised a $105 million Series C round, valuing the company at $2.1 billion, outstripping its peak valuation of $1.3 billion in 2018. The round was led by new investor Bain Capital Ventures.
Meanwhile, Berlin-based end-to-end cloud delivery platform Garden.io also announced $16 million raised in a Series A round, co-led by 468 Capital and Sorenson Ventures. We previously reported on Garden's seed round but failed to directly spell out the pun.
Finally, our weekly status update on the Ever Forward — still stuck. They're offloading it now, or as we like to call it, draining it. Spare a thought for the lady moving from Hong Kong to New York whose container load of belongings is stuck in Chesapeake Bay. My own container load of relocated furniture has long made it to these shores, but we've got so used to living out of suitcases, we haven't even bothered cracking it open yet. And that's the news.
CRAIG BOX: Bruno Andrade is the founder and CEO of Shipa, which provides an application-centric layer for deploying, managing, and securing cloud native applications. Prior to founding Shipa, Bruno joined Juniper Networks as a senior director of software engineering as the result of the acquisition of HTBASE, his previous venture. Welcome to the show, Bruno.
BRUNO ANDRADE: Thank you for having me here today.
CRAIG BOX: There are a lot of guitars over your shoulder as I speak to you today. I understand that, like many of our guests, you have a background in music and then you somehow made the pivot to software engineering.
BRUNO ANDRADE: The original goal was actually to be a musician. I attended a conservatory. I love all things music, but I realized I was not good enough to make the cut. I decided to make the pivot to software engineering, where I realized I was not good enough either, but by then it was too late to change.
CRAIG BOX: Is there a chance to be sort of half-good at both things and find a career that way?
BRUNO ANDRADE: I don't know. I tried to pick it up again, but going more towards the orchestration side and attending Berklee College of Music. I love it. But it requires time, and as part of a startup, it's tough. Plus now, I'm losing hair, as well, so the rock star life is pretty much gone at this point.
CRAIG BOX: You were talking musical orchestration, not container orchestration then?
BRUNO ANDRADE: Musical orchestration — much, much easier. Or no, maybe not.
CRAIG BOX: Once you decided that software was the path to go down, was that something that you went and studied at university?
BRUNO ANDRADE: Yeah, I went for computer engineering. That was great. I found it to be a little bit more focused on hardware than I expected. But I still enjoyed it. I definitely loved the space.
CRAIG BOX: How did you then end up working in Canada?
BRUNO ANDRADE: I used to work for IBM at the time, and I was part of the WebSphere group building, specifically, the application server, which funny enough, and not many people agree with me, but I see a lot of similarities between WebSphere at the time and Kubernetes, at least how we expose things. Because a large part of my team at the time was in Canada, then there was an opportunity for me to come and live in Canada and work with the team there.
CRAIG BOX: Perhaps a little bit of a culture shock, especially in terms of the weather.
BRUNO ANDRADE: It was amazing. The interesting fact is the first time I landed in Canada was right in February, so pretty much winter at the time. But it was my first experience out of the country, and I loved it. Over the years, I didn't love it as much. But yeah, it was great.
CRAIG BOX: I spent a couple of years living in Canada, and I noticed that of the 12 months of the year there was snow on the ground visible for 9 or 10 of them. [LAUGHTER] It starts in October, and the piles are still there in June.
BRUNO ANDRADE: Yeah, you kind of start considering moving to Australia or New Zealand at that point and getting kind of the best of both worlds. But Canada is a great place. It's pretty clean, pretty safe, pretty nice.
CRAIG BOX: Tell me about the kind of things you were working on when you were with IBM and the career path you had before starting your own companies.
BRUNO ANDRADE: Yeah, at IBM I used to work with the WebSphere Application Server, or WAS, group at the time. It was an infrastructure solution, catering to the needs of the developer, specifically focused on Java and so on.
I did spend some time working on the MQ or message queue for WebSphere as well, but most of my time was basically working on WebSphere and how we could support Java applications at a large scale. It was a great experience. It gave me a lot of insight on how developers were operating and how operators actually were managing WebSphere and scaling that infrastructure or that middleware infrastructure in large organizations.
CRAIG BOX: There is quite a pronounced difference in perception, at least, between the operations and SRE side and the development side in the Kubernetes world we live in today. Was that also true for WebSphere, with the two groups that very much thought of themselves as different things?
BRUNO ANDRADE: I don't think so. Because I like to joke around saying, we were all sysadmins. If you look at DevOps, SREs, DevSecOps, and whatever acronyms we have these days, they came up over the last few years. So now we have those fancy names. But when we were building WebSphere, we were all system administrators at the time, and we used to do everything. We didn't have that kind of breakdown between DevOps, platform engineering, SRE, and so on.
CRAIG BOX: Where was cloud in this space? Was this a time when cloud was becoming a thing or was this still very much on-premises deployments?
BRUNO ANDRADE: Cloud was becoming a thing. I would say at least the time I spent with WebSphere, it was that thing that every bank and every major organization, they would come back to you and say, we never would use that thing because it's not safe enough. We were scaling data centers still mostly on-premises. All this stuff, at least all the WebSphere implementations I've seen over that time, they were all on-premises. Cloud was coming, but it was just that thing that people would reject and say they would never use it.
CRAIG BOX: Is that part of the reason that you started your own company to work in cloud?
BRUNO ANDRADE: Not really. After IBM, I had an opportunity to work at Oracle — a very different experience, a very different product. The experience I had was seeing the development lifecycle struggling, right? The application was tied to the infrastructure so much that every time there was a change in the infrastructure, it would directly impact how applications were managed and delivered, which then ultimately impacts the business and, overall, how the team is seen.
Cloud was great. We moved to cloud because APIs did an awesome job on hiding the infrastructure complexity details. So we're now defining what we need for our applications to run, and our developers are free to run these apps. So that was kind of the main reason why I wanted to move to cloud.
CRAIG BOX: The kind of customers you would be dealing with between IBM and Oracle, though, I would have thought they were mostly the same. They would have had the same aversion to cloud. Was it just the passing of time that made it possible?
BRUNO ANDRADE: The passing of time and they started seeing more and more organizations — at least close enough to them, right, in terms of size or industries — adopting cloud, and the benefits from a business perspective were a lot better. Working at IBM and Oracle, every time you had a new customer adopting the product we were building, we saw them ordering hardware, which could take weeks or months to get in place, and then rack the hardware and then have a team to install Linux in it and get everything up and running.
On the other side, if you have a credit card, my friend, the world is your oyster with a credit card and the cloud, right? Organizations, they saw that they could get in front of their end customers. Banks could release their applications to end users a lot faster. So that was one of the main drivers.
CRAIG BOX: But what about those poor people who used to rack the servers and install Linux? What are they going to do?
BRUNO ANDRADE: I think a lot of these folks may have transitioned over towards more of the cloud management aspect of things. And now you see a lot of the AWS architects or Azure architects, and these folks, I think they slowly transitioned over to cloud management perspective.
CRAIG BOX: I should note other clouds are available.
BRUNO ANDRADE: [LAUGHS] Exactly. You used to have a lot of self-hosted and on-premises stuff running, so I'm sure there's a place for everyone.
CRAIG BOX: So HTBASE was set up as a multi-cloud company. Were you seeing that as a need for people at that time?
BRUNO ANDRADE: At that time, we saw organizations trying to spin up and consume infrastructure from multiple cloud providers, especially larger organizations. They always had that fear of being locked into one vendor. Plus because of GDPR at the time, they had to have different services hosted on different locations. Then not every cloud had a data center in a specific location. So that was a need for many organizations but mostly, mostly larger enterprises at the time.
CRAIG BOX: Did you start HTBASE in Toronto or in California?
BRUNO ANDRADE: I did start in Toronto, in a city called Mississauga, which is 20 minutes away from Toronto. We ran the office in Mississauga, and I ended up moving to California because of our funding round and later on the acquisition.
CRAIG BOX: The airport's in Mississauga. It surely wasn't that far to commute.
BRUNO ANDRADE: Fifteen minutes from the airport. And Pearson airport is great. It's really easy to move around and, yeah, no problems at all.
CRAIG BOX: But you didn't feel that there was a chance to keep the company running from Canada at that point? You decided that with the fundraising you needed to be where the money was?
BRUNO ANDRADE: Fundraising is tricky, right? At the time, I was traveling to California two or three times every week. Most investors are here in the Bay Area.
They like to invest in founders that are located in California. A lot of that is because they like to participate and help the company grow as much as possible — introducing them to partners and customers, and so on. So for them, the idea of investing in an infrastructure company out of Canada raises the concern of how much they can help. That's one of the key reasons I started fundraising here and moved here.
CRAIG BOX: Do you think that that's true today, especially with the move to remote work since the pandemic?
BRUNO ANDRADE: Honestly, it's still the case from a founding or leadership team perspective. I think being close to where your partners are — and I see the VCs as partners, smart money — I think it's still true. Now, for the remainder of the team, it's a lot less relevant these days where the team is located or distributed. But I still see that VCs have a little bit of trouble and concern investing completely outside their comfort zone, especially at the early stages, when you are just building the motion, the team, and so on.
CRAIG BOX: You founded the company and then you were also there at the acquisition by Juniper Networks. Tell me about the process by which you were found and acquired.
BRUNO ANDRADE: At the time, Juniper had a vision of building a multi-cloud layer that would include the orchestration of compute, network, and storage in a way that their customers could consume whatever cloud provider and whatever resources they would find best for their applications. We started conversations with Juniper from an investment perspective. I had an opportunity to meet with the CTO at the time, and then the CTO found the technology to fit the vision where he wanted Juniper to go at the time, and that opened up the opportunity for acquisition, and discussions went on from there.
CRAIG BOX: The acquisition was announced in November 2018. Were you a Kubernetes company at that point, or were we still dealing more with the underlying infrastructure?
BRUNO ANDRADE: We were dealing with the underlying infrastructure, but the orchestration at that point was already Kubernetes, especially because all the workloads and services we were deploying were container-based at the time. And that was one of the frustrations. Because again, we were deploying these services using Kubernetes, and for the engineering team, every time there was a change in the infrastructure, or the pipeline, or the automation behind it with infrastructure as code, that would change everything. But it was Kubernetes at the time already.
CRAIG BOX: But it sounds like you've identified a lot of the problems that your customers were dealing with that led you then onto your next venture.
BRUNO ANDRADE: One of the things I saw was that there would always be problems around the infrastructure space to be solved. Infrastructure is becoming a commodity around Kubernetes, with the offerings from Google and the other cloud providers — GKE and so on. As organizations start onboarding more developers and more applications into Kubernetes, they go through the same struggles as we did: how do you, at scale, support multiple pipelines, multiple teams, multiple service requirements, and still be able to evolve that infrastructure without impacting the application lifecycle?
CRAIG BOX: Do you think that we can truly get to a world where the application developer doesn't need to know what's running underneath their code?
BRUNO ANDRADE: I'm sure we will get there. At least, history tells us that, right? An easy way to answer that, from my perspective, is do you really care what hypervisor Google is using behind their VMs? You don't. What type of storage, what type of load balancer? You just don't see that.
CRAIG BOX: It is interesting, though, because you do care what the CPU architecture is underneath that. So there are layers that matter, and then there are some that maybe don't matter as much.
BRUNO ANDRADE: Exactly. I mean, for the majority of applications, you don't really care, right? Of course, if you're building very specialized applications, yeah, you may come in and say, I can only run on ARM and so on. But for the majority of applications out there, and for the developers getting their applications in front of end users — banks and so on — the underlying infrastructure implementation is just a detail.
CRAIG BOX: What was the pitch to VCs for Shipa? Were they all just falling over themselves to get involved in the Kubernetes space, such that there wasn't a huge challenge in getting funding?
BRUNO ANDRADE: That's interesting. Some VCs, they truly believe in the space and that it has to mature a lot. Some other VCs, they are concerned about Kubernetes being the new OpenStack. That discussion is still going.
From a pitch perspective, we keep on pitching and the pitch is still around the same. You learn and you build up on more, but the pitch is still the same, that we can repeat what we've seen being a successful model out there, which is bringing an API that can help you focus on the right desired state and focus less on the underlying infrastructure implementation detail. So the pitch is still kind of around the same.
CRAIG BOX: How much work did you do before the launch? Did you have the platform ready to go when you launched in October 2020?
BRUNO ANDRADE: October 2020, we launched our first MVP. That was when we felt that we had enough software to go out there and share with the users in a broader scale and iterate and get feedback. The goal we had was to launch a SaaS control plane of the product, which we ultimately did in June 2021. So October 2020 was the launch of a broader MVP that allowed us to learn from users and get feedback and build up on that to fine-tune the roadmap.
CRAIG BOX: Not long after that, in November 2020, you announced Ketch, which is an open source tool. Was that part of the platform from the beginning, was the idea that you were going to build SaaS around this open source tool?
BRUNO ANDRADE: That was part of the strategy from the beginning. It's a broad need from a developer perspective: how can you consistently deploy your application or your images without focusing on the underlying Kubernetes infrastructure details? We thought it was a good opportunity to release the component we were using internally to automatically create the Kubernetes manifests, or objects, that you need to run your applications, and just give it all to the community. So it was part of the product internally, and part of the goal was to release it and help the community have something that lets them focus on the app instead.
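The idea Bruno describes — expanding an app-level definition into the Kubernetes objects a developer would otherwise write by hand — can be sketched roughly like this. This is an illustrative Python sketch of the pattern, not Ketch's actual code; the function name and defaults are invented for the example.

```python
# Illustrative only: generate a minimal Kubernetes Deployment manifest
# from an app-level definition (name, image, port), the way a tool like
# Ketch hides manifest details from the developer.
def deployment_for(app_name, image, port=8080, replicas=1):
    """Build a minimal apps/v1 Deployment manifest as a plain dict."""
    labels = {"app": app_name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": app_name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {
                            "name": app_name,
                            "image": image,
                            "ports": [{"containerPort": port}],
                        }
                    ]
                },
            },
        },
    }

# The developer supplies only the app name and image; everything else
# (labels, selectors, pod template structure) is generated.
manifest = deployment_for("storefront", "registry.example.com/storefront:1.4")
print(manifest["kind"])
```

The point of the abstraction is that the platform team can later change how these objects are generated (new API versions, extra sidecars, different defaults) without the developer's input changing at all.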
CRAIG BOX: So at this stage, if I'm a WebSphere shop and I've got years of investment in this technology, and I know that I have to move to Kubernetes whether or not there is still going to be support for WebSphere, and I'm sure there will be support for it officially for many years to come, the writing is on the wall for the technology. How do I take Ketch or the Shipa platform and this mountain of code that I've acquired over years and move to the future?
BRUNO ANDRADE: From a Shipa and Ketch perspective, especially Shipa, we focus on containerized applications — applications where you have at least made the move to a containerized world. If you are at that stage, then it becomes a matter of which applications you want to move now versus later, and then what type of infrastructure we'll need to support those users.

And then it becomes more a matter of alignment inside the organization and across the teams than anything else. But we focus on that first stage: OK, now we have Docker. Our applications are being containerized. How do we move forward?
CRAIG BOX: Do you have suggestions or preferences as to how people who are starting from nothing and don't have a containerized application should get to that point?
BRUNO ANDRADE: If you don't have a containerized application, I think that opens up an opportunity for you and your teams to build using good patterns, right — the 12-factor application. Start detaching your application from the underlying services. If you're at that stage, then I think it's a good opportunity to start implementing good engineering practices that let you move towards a cloud native approach you will actually benefit from, rather than just a lift and shift into a container.
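The "detach your application from the underlying services" pattern Bruno mentions is the 12-factor config principle: backing-service details come from the environment, so the same container image runs unchanged anywhere. A minimal sketch — the variable names and defaults here are invented for illustration:

```python
import os

# 12-factor style configuration: the app reads backing-service details
# from environment variables instead of hard-coding them, so the platform
# (Kubernetes, a PaaS, docker run -e ...) decides what it connects to.
def load_config(env=os.environ):
    return {
        "database_url": env.get("DATABASE_URL", "postgres://localhost:5432/dev"),
        "cache_url": env.get("CACHE_URL", "redis://localhost:6379/0"),
        "log_level": env.get("LOG_LEVEL", "info"),
    }

# Locally the defaults apply; in production the orchestrator injects the
# real values, and the code never changes.
config = load_config({"DATABASE_URL": "postgres://db.internal:5432/prod"})
print(config["database_url"])
```

Containerizing an app built this way is then mostly packaging, whereas a lift-and-shift of hard-coded connection strings just moves the coupling into the image.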
CRAIG BOX: A number of other vendors in this space are looking at this as a holistic thing where they say, well, I'm going to take someone from keyboard all the way through to deployment. You're focusing, as you say, on the containers onwards. Was that a conscious choice?
BRUNO ANDRADE: It was a conscious choice. You've got to pick your battles, especially at an early stage. And we chose to focus on that space and moment in the market. There's always a space for everyone coming before that, but I think from a value perspective, we choose to focus on that space and narrow down and offer as much value as possible at that point in time for the organization.
CRAIG BOX: How much does it matter what the platform is underneath your tools? Are you responsible for figuring out what the system is underneath? Or do you just trust if there is some Kubernetes then you'll be able to do what you need to do?
BRUNO ANDRADE: From a Shipa perspective, we chose to focus on Kubernetes as the choice of the underlying infrastructure. So for teams that have decided to containerize their applications and are going towards Kubernetes infrastructure, that's where we come in. One of the things that we try to detach ourselves from is the underlying components choices.
Yes, you decided to implement Kubernetes. But from a Shipa perspective, we try to take you away from the cluster version that you're using, or whether you choose to use Istio versus Traefik, for example, for ingress controllers. So we give the DevOps or platform engineering teams the opportunity to choose the components they want to build their infrastructure with, but without impacting the developer experience. And we think that's where the really powerful opportunity is, because it allows you to keep on evolving your infrastructure without changing how developers manage the lifecycle of their applications.
CRAIG BOX: So you see this firmly as a tool that a platform engineering or an SRE team would manage and provide to application developers.
BRUNO ANDRADE: That's exactly what we see in the calls we have with users. And it's great, because we come in at a time when, as their experience with Kubernetes grows, they're growing the number of developers, growing the number of applications, and they get to the point where they have multiple pipelines, multiple clusters, multiple components.

And now, as they onboard more things, it's becoming a nightmare. And post-deployment, the developers are having a really hard time trying to understand and support their own applications. That's when we come in and that's where we add more value.
CRAIG BOX: What did you learn from your MVP and how have you changed the product since then?
BRUNO ANDRADE: Especially around how folks want to build their infrastructure and detach the application from it as much as possible — not only from the decision that, yes, you want to use Kubernetes. Different teams want to use different components for different reasons. Some teams really love Istio, for example, for the ingress controller, because it fits their business very well.
Some other teams, they don't care. They want to use NGINX. But for them, using a specific cloud provider and a specific cluster version really matters.
So what we learned in the beginning was that allowing the DevOps team to really build the pieces that they want to build, put that together, and scale that and experiment with new pipelines and new infrastructure as code tools and move fast was critical. I think that was the biggest learning from the MVP.
CRAIG BOX: It sounds like you're saying that the choice of the components really does matter to this team, and as long as the application team don't have to worry about whether it's Istio or NGINX, you're OK.
BRUNO ANDRADE: Exactly. Because infrastructure is going to keep on evolving. We started focusing on the MVP. We started looking at Terraform and kind of the very standard enterprise stack I would say.
But we quickly learned that folks were then experimenting with Argo CD, some others with Flux, and GitHub Actions was coming up pretty strong. Now you have folks like Pulumi.
If you don't tame and build an application layer that brings a standardization on how this is all consumed, this becomes a big problem for you to solve and be able to tame at scale. And as you scale right now, if you don't solve it now, it will just keep getting worse over time.
CRAIG BOX: Is PaaS a four-letter word? Is this something that you want to think of yourself, as being a PaaS?
BRUNO ANDRADE: We try not to see ourselves as a PaaS, because folks are building a PaaS internally. A PaaS, many times and historically, takes away the opportunity for you to say, for this project, I want to use GKE 1.19; for this other project, 1.22; I want to use this ingress, and Prometheus here and there. So PaaSes take that ability away from you.

The other thing is that we try to build what you need, but as an abstraction layer — we don't take over your entire cluster. That was another thing we learned from the MVP as well. A lot of PaaSes have a tendency to dominate your cluster. You can actually tie Shipa's API to a specific namespace in your cluster, so you can keep on running everything that you need, and for a specific project, you can go to a namespace and off you go.
CRAIG BOX: Does Shipa's involvement end with the software being rolled out? Or is there an observability piece to it, as well?
BRUNO ANDRADE: We break down the Shipa API into two main modules. One is the application definition. The second one is the policy definition. And there is a third component, a developer portal, which does nothing other than consume our own APIs — so we're our own customer for the APIs we give users. That developer portal has integrations into Prometheus, or you can plug it into Datadog, New Relic, or others, and give developers or application owners insight into what's happening directly from there, without having to dig deep into your APM tool to find what's wrong.
CRAIG BOX: Now you mentioned policy there. That's something that is front and center on Shipa's website today. It feels like that's something where you can easily make a sell to an enterprise where they are very keen on making sure that things aren't deployed in a way that is inconsistent with their requirements. How have you seen that pick up over time? Is that something that the tools are now possible for and it's possible to lead with? Or is that something that wasn't necessary before, and people taking Kubernetes more seriously and policy is higher on the list of concerns?
BRUNO ANDRADE: I don't think that it's that people didn't take it seriously before. It's that I think folks are learning as they go. And now that they are scaling their applications and consumers in that platform, they're learning which knobs to actually turn and buttons to press.
One of the interesting things is that there is a lot of talk about Kubernetes and so on, but very, very few organizations are running Kubernetes at a scale where they really know inside and out what's happening. Most places are at the beginning of their journey. One of the things they learn is that policy is important, but putting policies together in a workflow format, in a structured format, is really hard, because policy encompasses security scanning, registry controls, network policy. There are so many components involved in cloud native application security.
And now that they're learning this, they learn as well that they get impacted by the underlying infrastructure choices: again, API versions, cluster providers, pipelines, and so on. So bringing that structured, workflow-driven approach to protecting your cloud native applications is a complex topic, to be honest.
CRAIG BOX: Is this powered by Open Policy Agent? Have you had to build out your own integrations to be able to support a single policy mechanism?
BRUNO ANDRADE: We have not based ourselves on OPA, and that's the interesting thing, because we thought there was an opportunity to bring policies to a higher definition level. If you look at how Shipa policies are defined today, it's a very quote, unquote "English way." You can define policies without knowing or caring what underlying API versions you're using, again for Ingress, registry controls, and so on.
And you can grab that policy definition, and the exact same definition can be used by Pulumi, Terraform, Argo CD, and so many others. It can be connected and applied to any cluster, any API version, regardless. So now you have a scalable model where you don't have to build complex security definitions and programming, for example. We decided to take a different approach with that.
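To make the "English way" concrete, a policy definition in this style might read something like the sketch below. The names are invented for illustration, and Shipa's real schema may differ; the point is that nothing in it mentions a Kubernetes API version or Rego:

```yaml
# Hypothetical policy definition (illustrative only; not Shipa's real schema)
kind: Policy
name: prod-guardrails
registry:
  allowed:
    - registry.example.com      # only images from the corporate registry
security:
  scanImages: true              # block deploys that fail the image scan
network:
  ingress: internal-only        # no public exposure by default
```

Because the definition carries no cluster-specific detail, the same file can, in principle, be applied to any cluster the platform manages, which is the portability claim Bruno is making here.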
CRAIG BOX: That English way you talk about — hanging on in quiet desperation, perhaps — is that something that makes it easier for people to understand and express business requirements?
BRUNO ANDRADE: Yes. It's no secret that everyone is having a hard time hiring and onboarding people into new or existing projects, right? And one of the things we hear day in, day out is that as they bring both developers and DevOps folks onto the team, having them grasp and understand exactly what's happening, pick this up and run with it, and scale that model is challenging.
If you depend on your DevOps person to learn what API version to use, learn Kubernetes, learn Rego, learn so many components, by the time he or she has learned everything, they will probably be gone. Then a new person comes on board, and off you go, starting from ground zero again. So having that English definition makes a huge difference: you can move across pipelines, across different infrastructure-as-code tools, across clusters, and it's still easy for anyone to come on board, understand, and keep replicating and scaling that model.
CRAIG BOX: Are you able to quantify how much easier it is to manage an application via Shipa, or a method like it, than with the assembly-language YAML approach?
BRUNO ANDRADE: That's part of what we do from a return-on-investment perspective with so many users today. And I think launching the SaaS version gave us the opportunity to onboard so many users, talk to them, and learn from them. One of the things we learned is that approximately 40% of DevOps time actually goes into trying to build abstractions on top of their clusters so developers can consume them.
The problem is not only building the abstractions, but maintaining and evolving them as your infrastructure evolves. It's consistent: around 40% of the DevOps team's time goes into building and maintaining that layer. If you think about users on Shipa today, they have DevOps teams of 50-plus people. Taking 40% of their time for that is a huge impact on the organization.
CRAIG BOX: I remember speaking to someone in Brazil, of all places, who basically said that he was spending around 40% of his time managing the lifecycle of his Kubernetes clusters before there was GKE, for example. Do you think that we'll just keep moving up the layers and eventually people will say, all right, I don't care about running GKE. I'm going to pick an application platform in a SaaS format, and then I'll start thinking about how I can abstract away the next piece — pay people to write my code for me, whatever comes next.
BRUNO ANDRADE: I think we may get there. I think at the current stage, and maybe for the next couple of years, the choice of infrastructure provider, such as GKE, may matter, especially because you're picking different data centers and where you want to run your nodes. We're still in a world where we've got to be very careful about where we're putting our data and our applications and where they're running. So from that perspective, I think it will still be important for many users to choose where their applications are going to be hosted.
But back to your 40% case: where would you rather focus your time? Do you prefer to focus on governance and creating quick environments where developers can deploy and manage their own applications, rather than on building Kubernetes clusters? And developers managing their own applications is a whole different topic, with complexity all of its own.
CRAIG BOX: If you had 40% of your time freed up today, would you use it grooming your dog?
BRUNO ANDRADE: No, I would pick up guitar again. My dog is a lost cause at this point.
CRAIG BOX: You have a Bernese Mountain Dog, and you can see a picture in the show notes of exactly how hairy a Bernese Mountain Dog can be.
BRUNO ANDRADE: He's awesome. If you're looking for a dog, it's a great fit for a family. I've always had dogs, but this is the first time I've had a Bernese Mountain Dog, and they are amazing. But boy, do they shed like crazy. So you get the pros and the cons, but it's an amazing breed. I definitely recommend it.
CRAIG BOX: What do you think needs to be built into Shipa next? In the conversations you have with your customers, what's the next thing they need abstracted away?
BRUNO ANDRADE: Those conversations are difficult, right? Because often, when you first jump into a conversation, everyone has a tendency to think their requirements are unique, and that's why they want to build everything from scratch. That's an old tale: you are unique, and no one and no tool can solve your problems. But as you start digging into the requirements, they learn, and we all learn, that they have their points of uniqueness, but the majority of what they want to address is common, and other users are addressing it as well.
From a Shipa perspective, this will keep moving, because we're at an early stage, we're learning from users, and we're going to adapt to make sure we always keep adding value. But now we're at a point where we understand how your applications are consuming the underlying infrastructure, how your developers are deploying, what is using resources it shouldn't, and which applications are not complying with the policies you have. We're looking at building a lot more intelligence around how the system reports back what's violating policies and what should be improved, and acting on top of that.
CRAIG BOX: I've heard of various different teams working on what I would loosely call an application object within Kubernetes, whether it be a CRD or an ApplicationSet in the case of Argo, for example. Is there a standardization that could happen here? Is there talk between all the vendors working at the application layer about having one thing where you can effectively use different sets of tools to deploy?
BRUNO ANDRADE: That's such a great point. We really hope so, and that's what we strive for here at Shipa. If you look at the way we're exposing that application layer today, it's through an application API. We've worked together with folks from HashiCorp, for example, and plugged that application API into Terraform. The same definition is plugged into Pulumi, and into Crossplane.
If you have an opportunity to look at how we're defining that, we're really pushing towards it, and we're making our best effort from a Shipa perspective to expose that application model to the different infrastructure-as-code tools and pipelines out there. That goes to our goal of detaching the application from the infrastructure. So you can adopt GitHub Actions for one project, Pulumi for another, or Argo for another. But if you bring in a new developer or a new DevOps engineer, they can read those definitions, since they're English-focused, and plug them in anywhere.
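Shipa does publish a Terraform provider; a sketch of wiring an application definition through it might look like the following. The provider source, resource, and attribute names here should be treated as illustrative and checked against the provider's own documentation before use:

```hcl
terraform {
  required_providers {
    shipa = {
      source = "shipa-corp/shipa"   # provider source; verify in the Terraform Registry
    }
  }
}

# Hypothetical application resource; attribute names are illustrative
resource "shipa_app" "orders" {
  app {
    name      = "orders-service"
    framework = "prod-guardrails"   # the same policy framework, regardless of cluster
    teams     = ["payments"]
  }
}
```

The design point Bruno is making is that this same application definition, not a pile of raw Kubernetes manifests, is what flows through whichever pipeline or IaC tool a given team happens to use.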
CRAIG BOX: What's the one thing that you'd most like to see change about Kubernetes?
BRUNO ANDRADE: Kubernetes is super-powerful. It's a lot more than what most people need. I think the change doesn't have to come from a Kubernetes perspective, but how people perceive value. I think that's the biggest change that we want to see, and we're slowly seeing those changes.
We recently had a webinar with a customer, and one of the topics was the perception of value. As folks focus more on how to get their applications in front of users faster, they start focusing more on other aspects of the business, rather than on the Kubernetes cluster itself and its functionality.
CRAIG BOX: But in terms of someone who has to sit on top of that, is there a piece that you say, hey, I really would like to be able to change that part of the way Kubernetes works? Sidecar container ordering support, perhaps?
BRUNO ANDRADE: [LAUGHS] One of the things I struggle with, mainly because of my enterprise background, coming from WebSphere and then Oracle later, is this: while it is great to see new releases coming out so often, because bugs get fixed and functionality improves, if you're building on top of Kubernetes, you've got to be on top of your game, because APIs change and things can break apart. Coping with that speed of change and release is always a challenge. I would like to see something in that space that helps folks stabilize on top of Kubernetes.
CRAIG BOX: All right, thank you very much for joining us today, Bruno.
BRUNO ANDRADE: It was a pleasure. I appreciate the opportunity.
CRAIG BOX: You can find Bruno on Twitter @bandradeto, and you can find Shipa at shipa.io.
CRAIG BOX: Thank you, as always, for listening. If you enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, please tweet us @kubernetespod or reach us by email at email@example.com.
You can also check out the website, kubernetespodcast.com, where we list our transcripts and show notes, as well as links to subscribe. Please consider rating us in your podcast player so we can help more people find and enjoy this show. Thanks for listening, and we'll be back next week.