#124 October 6, 2020

Kubecost, with Webb Brown

Hosts: Craig Box, Adam Glick

When your infrastructure is effectively infinite, you may have to keep an eye on your credit card. Webb Brown started a project that does exactly that - Kubecost, which aims to reduce spend and prevent resource-based outages. He talks to Craig and Adam about the project and the company behind it.

Do you have something cool to share? Some questions? Let us know:

Chatter of the week

News of the week

CRAIG BOX: Hi, and welcome to the Kubernetes Podcast from Google. I'm Craig Box.

ADAM GLICK: And I'm Adam Glick.

[MUSIC PLAYING]

CRAIG BOX: I voted today. I wasn't even supposed to!

ADAM GLICK: Congratulations. Where did you vote?

CRAIG BOX: I voted from the comfort of my own house. I voted from halfway around the world from the election that I voted in. It's all made very easy in New Zealand.

We have a two-week early voting window, where you can just log onto the interweb, download your voting paper, fill it out, have someone sign a form that says you looked like an honest person while you did it, and then you take a picture of it and send it back. And you're done.

ADAM GLICK: Do they have the same kind of get out to vote campaign that we see in other countries?

CRAIG BOX: They're actually encouraging people to get out to vote, as opposed to the opposite, which you're seeing in certain other countries. But I will say, a couple of fun things. The first gentleman of New Zealand-- New Zealand obviously has a female prime minister-- her partner, Clarke, was out on Twitter, encouraging people to get out and vote early.

And the leaders of the main political parties have gone out. You can actually go to most polling stations in New Zealand for up to two weeks before the election. And they say, at the last election, about half of the country voted beforehand. And with COVID, I think it will be a lot more this time.

But they have also pointed out that of the 5 million people that live in New Zealand, there are actually a million other New Zealanders living outside the country. And so there is a campaign with a video, which you'll find linked in the show notes, encouraging foreigners to meddle in the New Zealand election, just by casting their vote, one at a time.

ADAM GLICK: I have to say that I always appreciate the humor in some of these ads.

CRAIG BOX: We're a very funny people. What's new in the gaming world?

ADAM GLICK: On my side, I got a chance to check out a game some people may have seen as an arcade machine, called "Killer Queen." And there is a home version on Steam and on Switch called "Killer Queen Black," which is essentially the same game, but you can play it over the network.

And it is just an incredibly wonderful throwback for those of us that remember having pockets full of quarters, plunking them into arcade machines, and playing these games. It's got 8-bit style graphics, and the game is easy to pick up and play. Matches are very short in terms of their time frame.

But it's incredibly social. Eight people play at a time. And it has this just kind of neat mechanic of merging together-- I guess you'd say it's kind of a race game, a fighting game, and "Joust," if people remember the "Joust" game. And it just comes together really nicely. So I've been enjoying going online and playing that.

CRAIG BOX: Is it fastidious and precise? Dynamite with a laser beam? Any of these things?

ADAM GLICK: I feel like you're making a reference to the song "Killer Queen."

CRAIG BOX: Yes, I watched the video, and I'm like, that soundtrack, they don't agree with each other. The name and the sound, that's not what I'm expecting to hear.

ADAM GLICK: It does have a nice metal soundtrack that goes with it, too, for those that appreciate a good distorted guitar riff.

CRAIG BOX: Yes, it just needs a little tinge more Brian May, and it'd be perfect.

ADAM GLICK: [LAUGHS] Shall we get to the news?

CRAIG BOX: Let's get to the news.

[MUSIC PLAYING]

ADAM GLICK: VMware held a virtual VMworld conference last week, with a number of cloud native announcements. Tanzu will be supported on VMware Cloud on AWS, with upcoming integrations announced for Azure, Google Cloud, and Oracle Cloud.

VMware Container Networking with Antrea is a commercial packaging of their open source project, Antrea, with new integrations with the Istio-based Tanzu Service Mesh. They also announced their intention to buy SaltStack, perhaps coincidentally the automation tool used to deploy very, very early versions of Kubernetes.

One final note: we would like to offer our congratulations to Bryan Liles, our guest in episode 54, on his promotion to principal engineer in VMware's telco group.

CRAIG BOX: Pixie Labs emerged from stealth with Pixie, a Kubernetes observability platform. The no-instrumentation data collection platform uses eBPF to collect metrics, traces, logs, and events, which can be augmented with existing telemetry. You can then run script-based analysis or use ML models, all running in your own cluster.

There's a free-forever community version, and pricing into the six figures for enterprise use. It's now in public beta. Pixie Labs also announced just over $9 million in Series A funding, led by Benchmark.

ADAM GLICK: Jeremy Herzog has released Cicada, an integration testing framework for containerized applications. The tool was built because Jeremy was feeling the gap between unit tests that test small portions of code and the potential bugs that can surface when you put an entire application or service together. Cicada is open source, and Jeremy is working on the ability to run the Kubernetes testing tool from within Kubernetes itself.

CRAIG BOX: Amazon launched a cloud developer kit for Kubernetes, or cdk8s, in May with support for JavaScript and Python. This week, they announced they have added Java, the most commonly used language at the bookshop, so you can now use factories to create deployments.

ADAM GLICK: Good and bad news from the Envoy project this week. The good news is that the first alpha release of Envoy for Windows is out. The cloud native proxy now compiles and runs on the platform. And new code commits must pass Windows tests in the CI process.

A dedicated group of developers is working hard on the Windows effort, but they do call out that Envoy on Windows is neither suitable nor supported for production workloads at this time. Service mesh support requires changes in Windows itself, and the Envoy team hopes to enable this in an upcoming version.

The bad news is that a high severity security vulnerability was discovered, due to bad header parsing. New versions of Envoy and Istio are out with the fix.

CRAIG BOX: In January, we told you about a new project called Kubenav, for mobile and desktop, and how it was moving fast. It moved so fast that version 3.0 is now out, supporting multiple providers, with infinite scrolling through the resources deployed on them. It also adds a Prometheus plugin and support for port forwarding.

ADAM GLICK: Another week, another cloud native security vendor is snapped up. This time, Cisco has announced an agreement to buy Portshift, for a reported $100 million. Portshift focused on cloud native security for Kubernetes and service meshes, using an agentless model. It will be folded into Cisco's emerging technologies and incubation group, and the deal is expected to close in the first half of 2021.

CRAIG BOX: Backup vendor Veeam has gotten in on the acquisition and cloud native game this week, dropping $150 million to purchase Kasten, with a K. Veeam was looking to expand beyond servers and VMs, and identified Kasten as the only company they saw with an application-centric approach to container backup. Kasten will remain a separate business unit, and its K10 platform will remain available.

ADAM GLICK: Finally, in rounding out the money news, Solo.io has raised $23 million in a Series B funding round led by Redpoint Ventures and True Ventures. Solo is most known for their API gateway Gloo, built on Envoy, and other service mesh related tooling. To learn more about Solo.io, check out Episode 55 with their CEO, Idit Levine.

CRAIG BOX: And that's the news.

[MUSIC PLAYING]

ADAM GLICK: Webb Brown is the co-founder and CEO of Stackwatch, the company behind the Kubecost project. Welcome to the show, Webb.

WEBB BROWN: Thank you so much, Adam. Great to be here. Really appreciate you having me today.

ADAM GLICK: You used to work in finance and private equity. Is it true that they say you can take the dev out of fintech, but you can't take the fintech out of a dev?

WEBB BROWN: [LAUGHS] It very well may be true. For me, since getting exposure to both of these, I've not been able to separate either. Both are passions and just things that I really, really enjoy. And so, for a long time, they've been closely intertwined. I've been unable to separate them so far.

CRAIG BOX: What's a Kubecost? Is it like a piecost?

WEBB BROWN: What is a piecost exactly?

CRAIG BOX: About $4.

ADAM GLICK: Ba dum tss.

WEBB BROWN: There are some similarities, but it's a little different. Kubecost is a project started by our team several years ago, aimed at helping teams get visibility into their spend and resource consumption in Kubernetes environments.

ADAM GLICK: Why did you build Kubecost?

WEBB BROWN: Myself and my co-founder were working on infrastructure monitoring at Google. I was working on internal tools, as well as tools for developers, and he was working on cloud products, specifically billing systems.

And what we saw was that when teams move to containerization, as well as container orchestration, it was just all this new complexity and all this change, right? So new tools, new abstractions, and also just decisions getting made in a different way.

So we saw a real opportunity to help teams here. And that oftentimes just started with basic visibility, right? So it kind of unlocked this black box of cost visibility in Kubernetes. So that's the first thing.

And then we saw an opportunity to really rethink how cost visibility and cost management tools were built: specifically, trying to build a great experience, really the experience we would want as developers. That's ease of use, that's owning and controlling your own data, all these things that were just really important to us, and that we thought Kubernetes as a platform and all of the tooling around it could enable.

CRAIG BOX: You were working, at the time, at a company that has at least a passing familiarity with Kubernetes. Did you consider taking this idea to a team inside Google?

WEBB BROWN: I think Ajay, my co-founder and our CTO, and I always felt like we were kind of destined to go and build something as a standalone business. My father was an entrepreneur, so growing up, I always felt that kind of deep in my bones, and really just felt like we could have the biggest impact by doing this as a standalone project first.

This truly started with an open source project first, just to experiment and see if we could help teams, and then eventually, kind of building a company around it, where we would raise money and put an amazing team to go and help teams of all shapes and sizes with this problem.

CRAIG BOX: What did your experience on observability at Google bring to the idea of bringing this out as an open source project?

WEBB BROWN: I think, first and foremost, it just gave us insight into how complex this problem and kind of related problems really are. And for us, it kind of taught us that cost is not an isolated variable when you're looking at it in the infrastructure layer, right?

And we can talk in more detail about it, but anytime you're dialing cost, whether it's changing instance types or moving from on-demand to spot or preemptible, all of these have not just cost impacts, but also potential performance and reliability impacts, et cetera. So it taught us the complexity and kind of the interconnectedness of all these variables. I think that's one.

And then, two, I think it just really empowered us and taught us how to build really large-scale applications for developers, right, that can scale to ingesting terabytes and terabytes of data. And it really just gave us the confidence and conviction, after having gone through that process multiple times within Google.

ADAM GLICK: You just mentioned that cost was related to reliability and performance. What are the relationships that you see there?

WEBB BROWN: We view it as a really important one. We think about this as a balance in terms of finding the right mix of benefits gained in terms of cost savings with the expected marginal cost in terms of downtime or performance degradation.

And just to pick a quick example: when teams consider this move from on-demand instances to spot or preemptible instances, on the surface, you can very quickly say, wow, 80% or 90% cost savings, that sounds amazing. Let's do that.

And then you realize that the way your applications are architected, as well as the way your infrastructure layer is configured, et cetera, can have a major potentially negative impact on not just uptime, but also reliability in general. So that's just kind of one dimension, where, again, our perspective is, you can get in real trouble just kind of looking at the cost side of the equation.

And that plays out with many different decisions, whether it's setting requests and limits, or the long tail of things you have to think about when moving to Kubernetes.
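To make the requests-and-limits tradeoff Webb mentions concrete, here is a minimal, hypothetical deployment sketch (all names and numbers are illustrative): requests reserve capacity you effectively pay for, while limits cap usage, so over-requesting inflates cost and overly tight limits trade savings for throttling or out-of-memory risk.

```
# Hypothetical deployment illustrating the cost/reliability dials:
# requests reserve (and effectively bill for) capacity;
# limits cap what the container may burst to.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: web
        image: nginx:1.19
        resources:
          requests:
            cpu: 250m          # over-requesting here inflates cluster cost
            memory: 256Mi
          limits:
            cpu: 500m          # too-tight limits risk CPU throttling
            memory: 512Mi      # exceeding this gets the container OOM-killed
```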

ADAM GLICK: The world of finance runs on a lot of closed information and closed software. But you made a decision to buck the trend a little bit and make Kubecost open source. Why?

WEBB BROWN: Yeah, it's something that's deeply important to us. I think it's just very much grounded in: we want to help as many people as possible. And we think that building and releasing free and open source software is the best way to do that.

And so, for us, helping as many people as possible means helping teams of all shapes and sizes, from smaller startups to really large enterprises. It's also teams that have different data requirements, where we think, again, the benefits of open source around transparency, control, et cetera, can be really appealing, given those very specific constraints.

So it's super important to us. And again, at the end of the day, it's just really about helping as many teams, and specifically kind of engineering teams, as possible.

CRAIG BOX: So you've built a great open source project. How does that become a commercial product of a company that you founded?

WEBB BROWN: In the short term, we can help tons of people with open source. But our view is that to help a lot more people in the long term, we need to build a sustainable company around that. And here, we're actually informed by, and kind of really guided by, the team at GitLab.

Actually, I had the chance to meet Sid about six months before starting the company-- Sid being the CEO at GitLab. The way he thinks about exposing enterprise or paid features specifically for larger companies-- teams with 100 engineers or more, teams that have directors and vice presidents, et cetera--

CRAIG BOX: Purchasing departments.

WEBB BROWN: Exactly, purchasing departments and all the features that kind of come with that scale and complexity, that's the kind of framework that we use to really identify paid features that would be above and beyond our community version or open source version.

CRAIG BOX: Are there certain paid features that you can only get if you apply it by fax?

WEBB BROWN: [LAUGHS] We haven't yet had anyone go that path. So I think we're going to kind of figure it out the first time that happens. But we may have to introduce some, if and when we cross that bridge.

CRAIG BOX: What are the features that are not in the open source version?

WEBB BROWN: Some of those kind of enterprise-grade things, like SAML and RBAC within our product, where you may want to limit users to read-only, or only give access to certain namespaces, et cetera. That's kind of one area.

A second area is around our durable storage. So as companies get larger and more complex and they have chargeback programs, they often want to have unlimited metric retention and a kind of a unified place to view all of their infrastructure. And so we do that built on top of a bunch of open source tooling, like Thanos or Cortex, et cetera.

And then the third is kind of us. We work really closely with teams during onboarding and helping them roll out this data to their teams, sharing best practices, et cetera, et cetera.

ADAM GLICK: Let's drill into the product a little bit. Can Kubecost look at not only what my costs are, but actually change my settings and optimize my costs?

WEBB BROWN: Out of the box, Kubecost provides two things. So first is just this visibility component, right? So we can help you see costs in Kubernetes and then externally managed services that Kubernetes tenants are using.

And just to give a couple of examples, that can be breaking down costs by any native Kubernetes concept, from namespace, controller, job, even down to the individual pod or container level. And it can also be by internal business concepts, so cost by team or application or department or cost center, things like that. So that's kind of part 1.

And then part 2 is we give you insights, right? So we identify over-provisioned or abandoned workloads or orphan resources, all of these different insights in our product. And then we do have some optional automation that can be enabled. Our most prominent example is around a notion of cluster turndown, where you can scale down, say, dev or staging clusters on nights and weekends.

It's totally optional. By default, we take read-only Kubernetes privileges, and that will, for the foreseeable future, always be true. But we do expect to enable more and more of this automation. And we actually have a number of APIs that teams already use to do their own automation.

And again, that's where we think just open source and open, visible documentation can really help teams take our product and do even more, and then ultimately help us guide what we build as we continue to expand the product.
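For a sense of what those APIs look like in practice, here is a sketch of querying Kubecost's allocation API from a default Helm install; the service name, port, and query parameters follow Kubecost's public documentation at the time of writing, but treat them as illustrative rather than guaranteed.

```
# Port-forward the cost-analyzer from a default install, then request the
# last 7 days of cost, aggregated by namespace (could also be controller,
# pod, or a label-based business dimension).
kubectl port-forward -n kubecost deployment/kubecost-cost-analyzer 9090 &

curl -s "http://localhost:9090/model/allocation?window=7d&aggregate=namespace"
```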

ADAM GLICK: How far does your analysis go? Are you looking at just, say, instance types within a particular cloud, or are you taking a look at things that are available on demand, spot-type pieces, or preemptibles? Are you looking at things between clouds? Are you looking at costs on-prem? How far does the analysis go?

WEBB BROWN: We go really deep in Kubernetes. So you can think about it as: we look at where you are running your particular infrastructure, whether that's GCP, AWS, or on-prem. And then, based on that, we go really deep, integrating with the Kubernetes scheduler, cAdvisor, et cetera, to yield all these insights.

You actually can use our product to make some of this analysis across different clouds. But we do focus most of the effort on: given that you are in, say, GCP, here's how you tune your applications and your infrastructure to find that right balance of cost savings and maximizing uptime, performance, et cetera.

ADAM GLICK: Do you look at other factors besides just the compute instances, things like the storage that people are using, the network, other parts of available infrastructure that lend themselves to the overall cost that people look at?

WEBB BROWN: Absolutely. We can kind of break this down into two pieces. So first is in cluster. We'd be looking at the storage you're using and how effectively you're utilizing that. We'd be looking at consumption of GPU.

We'd be looking even at network traffic, with an integration with your kernel, to actually see which pods are sending cross-AZ or cross-region traffic, or internet egress from within your cluster. So for the most part, any resource that's being consumed in Kubernetes, we'd give you visibility there.

And then the other part of this is we have an integration with your cloud provider's billing data. For example, on AWS or GCP, we'd be able to tell you that, say, you're using an external service like Cloud SQL or BigQuery, et cetera. And then you can actually map that back to whatever Kubernetes tenant, or set of Kubernetes tenants, is using that.

So if that's a database that's used by, say, a single deployment or is used by several namespaces, you can kind of do that allocation so you can have not only the visibility in your cluster, but kind of the complete picture of spend outside of the cluster as well.

CRAIG BOX: Do you find that the costs of cloud resources ever change? A provider, obviously, will occasionally lower a cost, but the idea, at least in the early days of the cloud, was people saying, oh, well, it'll be a lot cheaper in this particular zone at this time. And I think spot instances are about as close as we've got to that. Do you need to get constant updates on pricing from vendors in order to be able to make decisions on the way to run things?

WEBB BROWN: I think the extreme is just what you said, right? The spot market is generally very dynamic, and there can be real opportunities to shift compute and take advantage of market fluctuations, especially for jobs and workloads that are more batch-oriented or less time-sensitive.

But then, when it comes to just general compute monitoring, we see this as really a function of that internal requirement. Oftentimes, if it's an engineering team looking for big wins on cost savings, it may be less important to be perfectly precise with up-to-the-minute prices.

But if it's a team that's doing internal chargeback and you're actually sending that data to a finance team and you're going to be billing different engineering groups, well, then, all of a sudden, that becomes much more important.

And while on-demand prices aren't fluctuating that often, inevitably, the price of running a cluster does change. And for teams that are, again, doing something like chargeback, it's very important to have that be very precise.

ADAM GLICK: Does the model take into account what people might have as negotiated rates versus what are publicly posted rates?

WEBB BROWN: Yeah, absolutely. So the way our product works is, we talk to the Kubernetes API, the Kubernetes scheduler, et cetera, and we use on-demand list prices to give you real-time cost information. And then, as soon as your billing data becomes available from a cloud provider, we reconcile to that cloud provider's posting of your actual costs for, say, a particular instance.

And what that gives you, in our opinion, is the best of both worlds: you get real-time data that is oftentimes very close to accurate (and there are ways that you can tune that to make it really close), and then you get the perfectly precise data once that cloud provider data does become available.

And then also, for things like enterprise discounts, that reconciliation captures those kinds of differences from list prices, and again gives you that perfect reconciliation, whether you're using RIs or savings plans or preemptible, spot, enterprise discounts-- you name it.
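To put hypothetical numbers on that reconciliation: a node with a list price of $0.20 per hour shows up in the real-time view at $4.80 per day. If the billing data that arrives later reflects a negotiated 30% discount, the reconciled figure becomes $3.36 per day; the real-time estimate was useful immediately, and the precise number replaces it once the provider publishes it.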

ADAM GLICK: Webb, you talked about the costs, and we've talked a lot about the cloud part of that, where you can pull from APIs. But you also work with on-prem resources. How do you figure out the costing for resources on-prem?

WEBB BROWN: We actually have two different paths in our product for doing that on-prem, or even in air-gapped environments. The first is really simple: just provide us the cost of each resource in your environment. So, what is the hourly or per-minute cost of a CPU, a GPU, a gigabyte of storage, et cetera?

And then we have a more advanced pipeline, where you can actually come and say, each individual asset has a very specific price associated with it. And there, you would provide us a CSV list of all of the assets in your environment. And those can change dynamically if you add new instances to your cluster or remove them. But you could have that kind of very fine-grained, precise costing on a per asset basis.
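As a sketch of that second, per-asset path, the input might look something like the following; the column names here are hypothetical, not the exact Kubecost CSV schema.

```
asset_id,asset_type,description,hourly_cost_usd
node-a1,node,32 vCPU / 128 GiB bare metal,1.10
node-a2,node,32 vCPU / 128 GiB bare metal,1.10
gpu-b1,gpu,NVIDIA T4 accelerator,0.35
vol-c1,disk,1 TiB SSD volume,0.05
```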

ADAM GLICK: I'm going to put on my infra manager hat for a second here and ask, how do you price in things like risk? So if you're putting something, say, on a preemptible or a spot instance that could go away at any time, versus a regular instance where you can run it as long as you like, how is that factored into the model?

WEBB BROWN: Yeah, and again, this gets back to: cost alone is not the only variable that you should think about, right? And the short answer is, there's a lot of complexity here. But if you look at, say, our cluster rightsizing insights, for example, we think that context is deeply important, right?

And one example would be that if I'm running a couple of dev workloads on my own personal dev cluster, I may be actually kind of comfortable with maybe only two nines of uptime, right? If I can save, say, 70% or 80% of my costs, that may be a totally reasonable tradeoff for me in that situation.

But for my HA workloads, that's totally unacceptable. I'm very comfortable provisioning compute or memory, et cetera, to meet my P99, where I also have, say, 50% headroom on top of that.

So our insights would take that context into account; again, you can provide some additional metadata about your workloads to say, this is a dev workload, or this is a prod workload, or this is an HA cluster. And then all of our recommendations and insights would be driven from that contextually aware information.

CRAIG BOX: In the context of physical machines that do things like generate heat, Google and DeepMind, another Alphabet company, have done research in the past where they've used machine learning models to figure out how to run data centers at lower power and lower heat output. Is there an equivalent to this for Kubernetes running on VMs, and is this something that you look at in your model?

WEBB BROWN: I remember, at Google, being just fascinated by both the research that those teams did, as well as the kind of impact and gains they had. I think you're seeing more and more teams start to look at automation here, specifically automation making infrastructure-level decisions.

And I think the most prominent use case that we see is around either cluster autoscaling or another type of autoscaling. And our perspective is that there are real wins to be had here, but that teams should proceed really cautiously, right? And it's just because of the relationship we were just talking about, where if things do go wrong, it can be really expensive if this is in a production environment.

And I think generally, we see teams kind of take that approach, right, of steering more towards open source solutions, where there is visibility and they can understand what is actually happening in their environments. And then, two, proceeding cautiously, where they're testing very closely in staging clusters, et cetera, and really kind of taking their time before rolling out to a production environment, where the cost of an error could be high.

CRAIG BOX: A long time ago, we spoke to Karl Stoney from AutoTrader, who had built a system where they were looking at the cost of running not only the resources that ran inside a Kubernetes namespace, but also the network requests that were made to those resources using Istio, and then being able to say the business cost of this API call, on the whole, is this much.

There are this many API calls to this endpoint and multiplied out by the cost and so on. That kind of business metric sounds like it might be useful to a lot of teams inside companies who are using a tool like Kubecost. Are you able to take in external signals like that?

WEBB BROWN: We are. So yeah, we have that network visibility in our product, and also have an integration with the SMI spec to show data specifically from service meshes. I think this kind of hits on two points that I think are incredibly powerful. One is, just like you mentioned, having the total cost of an application, right? The fully loaded cost of running workloads can be really powerful.

And then, two is actually normalizing that data with business metrics that are relevant for your company. So we have teams that use our product to say, here is the cost of a single API request, or here is the cost of a user, for example.

And once you're able to frame cost in that light, you can make really informed decisions, again, about the tradeoff between, say, the cost of another 100 millisecond drop in response times versus the product or business impact to your users or API calls, et cetera.

So again, that, to us, is the holy grail of having these decisions at the infrastructure layer incorporated into your overall business decision making process.
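To put hypothetical numbers on that normalization: if the fully loaded cost of a service comes to $3,000 a month and it serves 100 million API requests in that month, each request costs $0.00003, or three cents per thousand requests. A proposed change can then be weighed directly in those terms: the infrastructure cost of shaving another 100 milliseconds off response times versus what faster responses are worth per request or per user.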

ADAM GLICK: Who's the target user for Kubecost? I'm picturing that there's somewhere between someone who's managing servers and someone who's got a green visor sitting over accounting books, scribbling into ledgers.

WEBB BROWN: Yeah, I mean, we definitely work with users in both camps there. But when we built this company, we really started with the developer and the DevOps or the infrastructure engineer in mind. We fundamentally believe that there is a big gap in tooling built for them in this market, one.

And then, two, we truly see Kubernetes as really just a big developer enablement platform, right? Where you really are empowering developers to do so much on their own and to have so much more control and visibility.

So that's very much our starting point. We, again, want to make tools that are really easy to use, easy to install, easy to understand and access, et cetera. And we think that doing the right thing for the developer, and building a great experience there, just leads to all kinds of good things, right? It leads to accurate and available information given to finance and accounting and executive teams, all of these stakeholders where this information is really important.

But ultimately, what we see a lot of the time is that it's, again, these DevOps teams or these engineering teams that are actually making the decisions at the end of the day, and those are the decisions that are really going to move the needle in terms of cost efficiency and performance, reliability, et cetera.

ADAM GLICK: I hear lots of different words using the ops suffix these days: there's DevOps, there's DevSecOps, DevSecAIOps. And back in June, the Linux Foundation announced the intent to create a FinOps Foundation. In August, the foundation launched, and Kubecost was a founding member. What is FinOps, and what is the FinOps Foundation?

WEBB BROWN: FinOps is this discipline focused around cloud or compute financial management. And it's really kind of aiming to bring together this multi-disciplinary approach, just like we were talking about, where engineering is involved, product and business teams are involved, and finance teams are involved to find the perfect nirvana to balance all the needs of an organization.

And I think it's really early days for FinOps, and I think you still see it called a lot of different things, whether it's cloud financial management or anything else. But I think it's really about this, taking this kind of holistic approach and realizing that to really get that right balance, there's multiple stakeholders that do need to be involved in some of these higher level decisions to balance all of these things we've been discussing.

CRAIG BOX: There's a lot of talk in the DevOps world about the concept of shifting left. And it seems to me that tooling like yours will allow people who are building and deploying software to have a more realistic idea at that time of what it will actually cost to run it.

Do you see this as a new trend? Do you think that, in the past, developers have just built something, and maybe later on come back and looked at how to make it cheaper? Do you think people are going to build that into their upfront designs?

WEBB BROWN: Yeah, I think absolutely. As more and more developers get real-time and near real-time data, as well as predictive tools, I think it's just becoming much easier to have that visibility before a workload is even launched, right, or as soon as it is launched.

And a big part of this is, again, because now with Kubernetes, you have these APIs where you can truly have real-time data. You don't have to wait till billing data becomes available maybe 24 hours, or in some cases, multiple days later. You can truly be making these engineering and kind of tuning decisions in real-time.

So I think that is incredibly empowering. And I think it will enable more and more developers to make these optimal decisions in much shorter times, oftentimes before a workload is even launched.

CRAIG BOX: Your company launched as Kubecost, but it's now becoming Stackwatch. Is this a rebrand, or is this a hint at the expansion of the product line?

WEBB BROWN: The Kubecost project actually launched before we had a company, right? So it started as an open source project. We truly just believe deeply that we wanted to start with an experiment to see if we could help teams, right? And if we could help teams, then we decided we would go and build a company around that project.

So Kubecost is that kind of project name, which is really kind of meant to be more standalone as time goes on. And then Stackwatch is our actual company name that will, over time, be kind of more and more separated from the Kubecost project itself.

ADAM GLICK: What comes next for Kubecost and Stackwatch?

WEBB BROWN: It's really about, for us, staying super focused on this problem area. So we, for the foreseeable future, expect to just do more and more in terms of getting teams visibility and insights and, ultimately, optimization and automation tools and workflows.

We want to be the absolute best in this area for teams running Kubernetes and cloud native tooling. And we're going to continue to grow the team. We're actively hiring across a number of positions: full-stack engineer, solutions engineer, and others. So we just want to keep doubling down and help more and more teams here in this problem space.

We've got more than 1,000 active deployments of the product today. And we want to continue to see that number grow and just, yeah, help more and more teams as time goes on.

ADAM GLICK: Finally, I know that you're an avid biker. Do you have a favorite ride that you'd recommend to people that everyone should do at least once in their life?

WEBB BROWN: A couple months ago, I did one of the most magical rides I can imagine. I had my first trip to Glacier National Park, where we biked up Going-to-the-Sun Road actually at a time where it was closed to all car traffic. And I would say, if you ever get the opportunity to do that, it was one of the most amazing experiences I've ever had.

CRAIG BOX: Sorry, that's actually the name of the road, Going-to-the-Sun Road?

WEBB BROWN: It is, yes.

CRAIG BOX: Wow.

WEBB BROWN: And it is a pretty good little climb. It definitely had me pushing pretty hard at the end, but gosh, just the scenery, everything about Glacier, from what I saw, was just absolutely incredible.

CRAIG BOX: For people that are unable to travel at the moment, do you have any VR equivalent recommendations?

ADAM GLICK: Is there a Peloton option for that?

WEBB BROWN: Yeah, I was going to say, I have a lot of friends that have gotten into riding their Pelotons often. I, unfortunately, am less familiar with VR options there. But from what I hear, I'm hearing great reviews on Peloton options.

CRAIG BOX: All right, well, Webb, thank you very much for joining us today.

WEBB BROWN: Guys, thank you so much for having me. Really enjoyed it.

CRAIG BOX: You can find Webb Brown on Twitter at @webb_brown, with two B's before the underscore and one after it, and you can find Kubecost on the web at Kubecost.com.

[MUSIC PLAYING]

CRAIG BOX: Thanks for listening. As always, if you enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on Twitter at @kubernetespod, or reach us by email at kubernetespodcast@google.com.

ADAM GLICK: You can also reach us at our website, kubernetespodcast.com, where you'll find transcripts and show notes, as well as links to subscribe. Until next time, take care.

CRAIG BOX: Please go and vote!

[MUSIC PLAYING]