#76 October 23, 2019

Pulumi, with Joe Duffy

Hosts: Craig Box, Adam Glick

Joe Duffy is the founder and CEO of Pulumi, an open-source cloud development platform. He joins Adam and Craig to explain why a general purpose programming language is a better tool for cloud infrastructure than a domain-specific language (or YAML), and how you can use Pulumi to provision cloud infrastructure and Kubernetes resources alike.

Do you have something cool to share? Some questions? Let us know:

Chatter of the week

News of the week

CRAIG BOX: Hi, and welcome to the Kubernetes Podcast from Google. I'm Craig Box.

ADAM GLICK: And I'm Adam Glick.

[MUSIC PLAYING]

ADAM GLICK: How's life in the future, Craig?

CRAIG BOX: I had to give up a Friday. I left San Francisco on a Thursday night, and I arrived in Oakland on a Saturday morning.

ADAM GLICK: The dangers of the dateline?

CRAIG BOX: Yeah. I will get two Mondays in return. I hope Friday was enjoyable, but I have made up for it with a great day so far at DevOpsDays Auckland. I gave a chat about extending the service mesh past the cluster and adding VMs to your Istio environment. And I met a bunch of great podcast listeners. I was just sitting in the audience, having a chat with a friend, and then a guy turns around from the row in front and says, "It's like listening to the podcast live." I said, "That's fantastic, but I'm sure you must listen to it faster than one speed."

ADAM GLICK: [LAUGHS] It's funny, I ran into someone this week who had a similar comment to me that said that-- I needed to pick up something from their desk, and they said, "No problem, stop by. But to authenticate you, you'll have to speak at 1.2 speed because that's what I normally listen to you at."

CRAIG BOX: Yeah, I listen to most of my podcasts at almost 2x, but it's not a competition. I've almost run out of stickers now, though.

ADAM GLICK: Oh, well, I'll have to print up some new ones.

CRAIG BOX: Yeah. Tell me, though. Are 64-bit applications the future?

ADAM GLICK: Well, they may be the future, but not all of them are going to work there, as I've taken a sad trip to Catalina this past week. I updated one of my machines and found out that there are 32-bit apps that are not compatible. It doesn't always warn you about those, and I've been chatting with some of the companies that make those. So if you have got your Mac and got the notification about the new version, just realize you do want to check those apps, as some of these manufacturers may decide not to release 64-bit versions or want you to buy upgrades to their new ones.

CRAIG BOX: May the future be full of interesting times.

ADAM GLICK: Let's get to the news.

[MUSIC PLAYING]

CRAIG BOX: Microsoft announced the Distributed Application Runtime, or Dapr, a set of building blocks for microservices that operate as a sidecar to your application. Dapr is designed to help enterprise app developers not have to worry about all the concerns of a distributed system, but without the downsides of having to have language-specific libraries, which led everyone to using sidecars in the first place.

Building blocks included in the alpha include service invocation, state management, Pub/Sub messaging, event-driven resource bindings, virtual actors, and distributed tracing. Microsoft also announced the Open Application Model, developed with Alibaba Cloud, which is a packaging format similar to the Kubernetes Application CRD. The Kubernetes runtime for it is called Rudr, because a ship without a rudder's like a ship without a rudder.

ADAM GLICK: IBM/Red Hat have announced OpenShift 4.2. This release brings new features, including a developer web console and IDE connectors for Eclipse, JetBrains, and VS Code. The release also provides a deployment extension for Azure DevOps, the product formerly known as Team Foundation Server; CodeReady Containers for developing on laptops; and the GA of OpenShift for Google Cloud Platform, Azure, and OpenStack. Red Hat also mentioned a migration tool for OpenShift 3 customers, but said that it will be released sometime in the future.

CRAIG BOX: Fairwinds has released a new open source tool designed to help you better tune your vertical pod autoscaler. Named Goldilocks, the new tool provides a dashboard for each namespace and, by collecting data, provides you with visual recommendations that they claim are, quote, "just right", based on CPU and memory usage. The tool also conveniently displays the YAML changes you would need to add to your config files.

ADAM GLICK: Surprising support came from three Twitter users, all of whom have bears as their user image.

Canonical announced the release of Ubuntu 19.10 with a focus on their MicroK8s distribution. Using MicroK8s, Ubuntu makes it easy to run at the edge with Raspberry Pi devices, as well as in the cloud and on IoT deployments. The announcement also called out easy integration with other open source projects, such as Kubeflow, Istio, Knative, CoreDNS, Prometheus, and Jaeger. Of course, there's a bunch of non-Kubernetes improvements in hardware support, file systems, the GNOME desktop environment, and more. If you'd like to hear more from Canonical CEO Mark Shuttleworth on their work on OSS and Kubernetes, check out Episode 60.

CRAIG BOX: Scytale has announced the release of SPIRE 0.8.2. The SPIFFE project and its SPIRE runtime provide identity for service-to-service authentication. And this release helps customers that are beginning to deploy and operate SPIRE at scale. Contributions come from companies including HashiCorp, Uber, and Yahoo Japan. And the release brings both performance improvements and simplifications to configuration administration. You can learn more about SPIFFE and SPIRE in Episode 45 with Andrew Jessup from Scytale.

ADAM GLICK: Pablo Moncada Isla from Spanish telco MASMOVIL Group reports impressive CPU and memory improvements in recent Istio versions. MASMOVIL runs a 130-node cluster with more than 1,800 pods and 500 services across 180 namespaces. He said that by adopting the sidecar configuration object in Istio 1.2, which reduced the programming burden on the Envoy proxies, they reduced Istio CPU usage by 90% and memory usage by 80%. Upgrading to Istio 1.3 further reduced Pilot CPU usage by another 90%.

CRAIG BOX: Palo Alto Networks' Unit 42 security group announced this week the first known case of Graboid, a worm that has infected at least 200 unsecured Docker hosts and used them for Monero digital currency mining. Unit 42 worked with Docker to remove the offending images that the worm was using to access unsecured Docker CE instances on the internet. The worm has the Docker engine download a number of other containers that stop crypto mining and then look for the next host to move on to. The code is located in containers, and most security scanners don't check inside the containers being installed, so it had generally gone undetected. Images used in this attack had been downloaded over 6,500 times.

ADAM GLICK: Also from Palo Alto Networks this week, an analysis of two recently patched Kubernetes vulnerabilities. In CVE-2019-16276, the Go language HTTP interpreter incorrectly handles whitespace in headers, which could allow someone to send a fake header and claim to be another authenticated user, such as the admin. CVE-2019-11253, also called the Billion Laughs vulnerability and patched earlier in Kubernetes, was actually an exploit in the YAML parser. The YAML parser has now also been fixed. Some additional reminders of why it's a good idea to keep your Kubernetes clusters patched and up to date.

CRAIG BOX: The CNCF posted a blog this week highlighting the improvements in Harbor 1.9, which was released back in September. Harbor is a container registry, and the post dives into four new features: webhooks, quotas, tag retention, and CVE exception policies.

ADAM GLICK: The CNCF has also released the first schedules for their upcoming forum series. The schedules for the Seoul, Korea and Sydney, Australia forums on December 10 and 13 have been released. Additionally, the Kubernetes forum in Bangalore will be from February 17th through the 18th, followed by Delhi from February 20th through the 21st. If you're interested in speaking at these events, the CFP is open until November 1.

CRAIG BOX: Cruise has been writing up their experience building a platform on top of Google Kubernetes Engine to power their self-driving cars. In their third post, they talk about how they require a multi-cloud and hybrid deployment, with parts of their infrastructure being public and parts being private, and how they have architected Kubernetes to take advantage of both public and private cloud resources working together.

ADAM GLICK: Finally, Sugarkube with a K has put up a post espousing the benefits of cattle clusters. Building off the idea of having your containers be ephemeral, i.e., cattle and not pets, Sugarkube argues that your clusters should be ephemeral as well. Conveniently, Sugarkube has created a tool for creating just this kind of environment, and they've released it as open source. If the idea of making both your infrastructure and applications easy to stand up and very repeatable appeals to you, then this project may be what you're looking for.

CRAIG BOX: Surprising support came from three Twitter users who all have cows as their user image.

And that's the news.

[MUSIC PLAYING]

CRAIG BOX: Joe Duffy is the founder and CEO of Pulumi, an open source cloud development platform. Welcome to the show, Joe.

JOE DUFFY: Great to be here.

ADAM GLICK: You started your career working on developer tools over at Microsoft. What was that like? And I hear you may have run into some of the Kubernetes community way back in that day as well?

JOE DUFFY: Yeah, so I started pretty early on the .NET project. I was actually working on the guts of .NET, working on the virtual machine, the Common Language Runtime, mostly working on concurrency primitives and trying to make parallelism possible for humans to program, which turns out to be quite difficult, and actually has some parallels with distributed programming.

But yeah, back then, working on some of the new .NET technologies, I actually worked with Craig McLuckie and Joe Beda back then. Craig was on the WCF team which was working on the communication primitives for message passing. Joe was working on the presentation framework, WPF, codenamed Avalon back then. So it's funny running into them many years later when they were at Google and reconnecting, and small world.

CRAIG BOX: It does seem like everyone in Seattle has to do a stint at each of the three main tech companies based there.

JOE DUFFY: Yeah, it is very common, although I have never worked at Google, nor have I worked at Amazon or Facebook, so maybe I'm an outlier in that respect.

ADAM GLICK: Acquisitions may happen. You never know.

JOE DUFFY: Yeah, I decided to start a company. I've always been into building businesses, building teams, building technologies. And so starting a company for me was the right next step, but you never know what happens next.

CRAIG BOX: How does the experience of desktop tooling differ from what people were doing in cloud a few years back?

JOE DUFFY: I think, actually, there are a lot of similarities in terms of the journey from-- you look at in the early days, people programming with assembly language, moving to higher level programming frameworks, codifying best practices. Then you have package managers, so you can reuse things. And I look at the transition in the 2000s to more concurrent programming, thanks to the multicore changes that were happening, and that sort of forced a lot of people to realize, hey, actually, even desktop programs themselves are almost like little distributed programs where you're doing multi-threading and message passing, which is a much better model than shared state and locks and race conditions and all the things you run into when you're trying to do kind of single process programming.

And so I look at the shift that we're undergoing in the cloud, and I see microservices. A lot of this is really about figuring out what the right level of isolation is, the right granularity for these different pieces of functionality, and then how do they communicate with one another, and how does a developer approach building that system, often with a team working on different pieces.

So I actually see a lot more similarities than differences. And I think that really inspired the way we approach things with Pulumi. And I think we took a little bit of a contrarian approach in that respect, but definitely, as people continue along this journey of adopting distributed programs, moving to Kubernetes, I see people sort of realizing the similarities more and more.

CRAIG BOX: When you have a split when you have developers on one side of a fence and operators on the other, it kind of feels like cloud infrastructure and the configuration for it came from the operator side in that it's all config files and now, obviously, YAML. Breaking down that wall lets people who are familiar with all the tooling from the left hand half use that in a cloud environment. Why do you think that it took so long for that to be a thing that people did?

JOE DUFFY: I think honestly, it's sort of a natural transformation over-- you look at 10 years ago. Actually, the DevOpsDays conference that got started 10 years ago, there's a 10 year anniversary in Belgium coming up that we're going to be at that really kicks off and celebrates the 10 year anniversary of the term DevOps. And even back then 10 years ago, people knew that these walls needed to be broken down. And a lot of that was inspired by sort of more agile programming techniques.

But I think even desktop programs, you look at-- there's always been the administrators that have to install the software, have to patch the software, have to basically manage it. And this is more in the enterprise space than in consumer. But this split already did exist even before the cloud.

And then I think if you look at how we got into the cloud, it started with virtual machines where developers basically didn't have to think about the cloud as a first class thing. They built an n-tier application-- the app tier, the web tier, and the database tier. They often had DBAs who kind of managed the database for them. And then they sort of threw it over the wall, and the ops team picked it up and made it work. They said, how many VMs do you need? OK, I need three. Great, statically provision them. And then it sits there for five years, and the only time you have to interact with the ops team is when something goes wrong, or you need to change the capacity or whatnot.

But you fast forward to now and just the pace of change and the sheer number-- AWS has 250 different services that you might want to use. We have Kubernetes, containers, serverless. The landscape has really changed, and so having that wall between the two sides of the house really slows people down. And you look at the most innovative organizations, Airbnb or Spotify, Netflix, some of the companies you think of when you think really innovative cloud technologies, they don't have that split.

Of course, there's specialization that happens. Somebody has got to know the best way to run a network or to administer a Kubernetes cluster, but that doesn't imply we have to speak different languages and be on separate islands, interacting with each other through ticketing systems. We really see that these modern organizations are really collaborating much more closely.

ADAM GLICK: What is Pulumi, for those that aren't familiar with it?

JOE DUFFY: Pulumi is a modern infrastructure-as-code tool, and we say that because it's multi-cloud. And that's not to say we're trying to paper over differences in the clouds. We embrace what makes all the clouds great. But it also supports Kubernetes. But we approach it differently than most infrastructure-as-code tools. We saw that basically what we wanted to do is empower every developer and every operator to use the full power of what the cloud has to offer. And we want to allow them to work better together.

And so we took a few fundamentally different bets at the outset. First of all, it's all open source because developer tools and operator tools just have to be open source, and so it is open source. But more than that, we use general purpose programming languages to do things that typically people thought you needed DSLs or YAML templating for. So we figured out how to basically take general purpose languages and marry that concept with the idea of infrastructure as code, which basically unlocks access to ecosystems of tools.

You can use your favorite editor. You can use package managers. You can use test frameworks. You can actually test your infrastructure. You can build bigger abstractions out of smaller ones, so you don't always have to copy and paste the same 1,000 lines of YAML over and over again. You can benefit from functions and classes. And all the things we know and love about application development now applies to infrastructure. And what we're seeing is that it really helps the two sides of the house work much better together.
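As an illustrative sketch of the reuse Duffy is describing-- in plain Python rather than the actual Pulumi SDK-- a single helper function can stamp out many similar Kubernetes-style resource definitions, where previously you would have copied and pasted near-identical YAML. All the names and image URLs here are hypothetical.

```python
# Illustrative only: plain Python, not the Pulumi SDK. The dict shapes
# mimic Kubernetes Deployment objects to show how a function replaces
# copy-pasted YAML.

def make_deployment(name, image, replicas=2, labels=None):
    """Build one Deployment-shaped dict with sensible defaults."""
    labels = {"app": name, **(labels or {})}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# A loop and a function replace hundreds of lines of near-identical YAML.
services = ["frontend", "api", "worker"]
deployments = [make_deployment(s, f"registry.example.com/{s}:v1")
               for s in services]
```

The same pattern-- functions, default parameters, loops-- is what a general-purpose language gives you over a templating layer.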

ADAM GLICK: This sounds a lot like other tools that I think people may be familiar with, kind of the Salt, Ansible, Puppet, Terraform, Chef, or even ones by particular platforms, basically scripting languages for operations folks to kind of script out. And that's typically what people have called infrastructure as code. How does this relate to that? How is it different?

JOE DUFFY: Definitely a lot of similarities. And for sure, we were inspired by a lot of the prior tech that is common. I think if I break it down, there's really, like, the first wave of infrastructure as code tools was configuration management tools. So Chef, Puppet, Ansible, Salt-- the idea with those originally was, hey, we're going to provision virtual machines, and we need to configure the stuff that's running on the virtual machines, right? So it's less about the provisioning. Typically, you'd use vSphere or some other control plane for provisioning.

And these tools were really like, what happens when the virtual machine comes up? Well, we want to go install some software. What happens when a new patch comes out? Well, these can help with patch management. So that's sort of the first wave, and certainly, we actually integrate with those. If somebody is already using Chef and Puppet to configure VMs, we're happy to work with them. But what we're seeing is a lot of people are moving more towards provisioning, this notion of immutable infrastructure, where, instead of going and patching a server, you just recreate the server with a new blueprint, whether it's based on a Dockerfile or even just spinning up a blue-green new copy of the server.

And furthermore, when you're moving to managed cloud services, Amazon, for example, is managing control plane upgrades and patches for you. You don't have to really think as much about it. Right? If you're using EKS, for example, you just say, hey, Amazon, I want 1.14, and the control plane servers, at least, are kind of isolated from you. So for those, provisioning tools are more popular. And so Terraform is one example. AWS has CloudFormation. Azure has Resource Manager templates.

So Pulumi is more in that vein, where it's doing provisioning. You're declaring the infrastructure you want, and then Pulumi is figuring out how to go out and create, read, update, or delete the infrastructure, based on the notion of a goal state, which actually is very much like how Kubernetes configuration works. Right? With Kubernetes config, you basically describe to Kubernetes, using YAML, the set of objects you want to exist. And then the Kubernetes controller goes off and makes sure that the objects exist and are in the desired state. If something goes wrong, it will rectify the situation by recreating objects as necessary. And that's kind of what Pulumi does, but Pulumi works for the Kubernetes object model, in addition to AWS, Azure, Google Cloud, Cloudflare, DigitalOcean, and a lot of other cloud, SaaS, or alternative cloud providers.

CRAIG BOX: If I'm coming along in my programming language of choice, am I saying I want there to be four VMs, or am I giving an imperative instruction, which says environment.createVMs(4)?

JOE DUFFY: It's the first. I'm glad you brought that up, because that's a critical distinction. Today, you've got the Cloud SDKs. You can go open code and create servers using general purpose languages. But if you're doing that in a script in a sort of ad hoc way, that's problematic. If there's a failure, how do you recover from failure? If somebody Ctrl+C's the script in the middle, what happens? What if you want to roll back to a prior state? Usually, people that do scripting, they have to think about basically N by M upgrades. Every state in the upgrade process, they have to open code.

And instead, with Pulumi, and the other provisioning tools I mentioned like Terraform and Kubernetes config, you basically say in the code-- either using a for loop, or you can just say, hey, server 1, server 2, server 3. And Pulumi understands the objects and how they relate. So it captures the dependency graph, so it knows what order to delete things in, and can give you nice views of how one resource relates to another. And then it understands how to diff that with the current state and then basically drive the current state such that it matches the desired state. And it does that by orchestrating a series of CRUD operations, and it parallelizes them so that it happens as efficiently as possible.
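The goal-state reconciliation Duffy describes can be sketched in a few lines of plain Python-- illustrative only, not Pulumi's actual engine, which also handles dependency ordering, failure recovery, and parallel execution. The idea: diff the desired set of resources against the current set, and the result is the create/update/delete plan a provisioning tool would then execute.

```python
# Illustrative sketch of goal-state reconciliation (not Pulumi's engine).
# Desired and current state are dicts of resource name -> properties;
# diffing them yields the CRUD plan needed to converge.

def plan(desired, current):
    creates = [n for n in desired if n not in current]
    deletes = [n for n in current if n not in desired]
    updates = [n for n in desired
               if n in current and desired[n] != current[n]]
    return {"create": sorted(creates),
            "update": sorted(updates),
            "delete": sorted(deletes)}

current = {"server-1": {"size": "small"},
           "server-2": {"size": "small"}}
desired = {"server-1": {"size": "large"},   # changed -> update
           "server-3": {"size": "small"}}   # new -> create; server-2 goes away

p = plan(desired, current)
```

Running the plan repeatedly against refreshed current state is what lets the tool recover from a partial failure: the next run simply computes a smaller diff.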

CRAIG BOX: Does this only run when I ask it to, or do I operate it as a controller where I can have it continually operating them?

JOE DUFFY: We have some people that run it as a controller, which you can do, but you also can run it sort of on an ad hoc basis. So you can just run it on your developer desktop. Most of our customers actually do it in response to check-in. They're checking in the configuration code, and then we have integrations with a lot of CI/CD providers, like Travis or Jenkins or Codefresh, GitLab. And so we're trying to basically integrate into existing workflows a lot of people are using, especially GitLab, for their existing infrastructure and Kubernetes deployments.

ADAM GLICK: You've talked a little bit about it being declarative as opposed to imperative infrastructure and how that's a big change and something that's probably familiar to those of us in the Kubernetes world, but not necessarily people who come from the world of run books and traditional infrastructure. One of the other differences you have, though, is that you mentioned DSLs or Domain-Specific Languages, and you mentioned a number of them. You don't work off of Domain-Specific Languages, though. You work off of languages that developers may be more familiar with, correct?

JOE DUFFY: That's right.

ADAM GLICK: What are some of those languages, and what's the benefit to people using them?

JOE DUFFY: We're trying to meet people where they are. That's kind of one of the advantages of the approach we took, is you basically gain access to existing ecosystems. And so if you're using JavaScript or TypeScript, you can use NPM packages. If you know how to use the language and use the async model, you can use that. I mean, you basically have full access to the entire programming language. And then we have the engine, which is a multi-language engine that's shared amongst all the languages. So our engine is actually open source. It's written in Go. It knows how to basically host these language runtimes and then monitor what's going on in the runtime so it can do the CRUD operations that I was mentioning.

When we started the company, we actually started by creating a new language. It turns out Luke, who was our founding engineer-- he was our CTO-- he wrote the first TypeScript compiler at Microsoft. So he actually had a hand in starting that project. I was on the C# design team way back in the early days, working on things like LINQ. So we definitely could have created a new language. And we went down that path, and we realized it turns out that we don't need a new language to solve this problem. And we can actually solve it with Python and TypeScript and C# and Go, even.

And so we said the world doesn't need another general purpose language. Maybe there's some exciting projects like Dark that just came out a few weeks ago. And I think the world definitely could benefit from some new languages, but we didn't feel it was necessary to solve this set of problems. And DSLs, I've worked on a lot of DSLs over the years, and my experience is every DSL always grows up to be kind of an approximation of a general purpose programming language, but without the sort of foresight and design of being a general purpose language. So you end up having to wedge in concepts like for loops or functions or abstractions. And it starts to feel clunky over time.

And so we saw a few years in the future and said, hey, the world needs general purpose languages here. We need abstraction. We need classes and functions and for loops, and we need package managers so we can share best practices rather than all of us continually reinventing the same patterns without a good way of sharing those patterns.

ADAM GLICK: It does seem like a lot of them follow this traditional lifecycle model of going through it, and you've got some scripting, and then is it Turing complete, and then do you have tooling around it, and what about the various additional libraries. And so you've essentially worked around that by working with languages that people are already familiar with. What are the languages that are supported?

JOE DUFFY: Right now, we support any Node.js language, so JavaScript. I would say TypeScript is one of our most popular because you get all the benefits of Node.js, but you also get strong typing. The editor will give you squiggles if you mistype something. You get interactive sort of documentation if you're hovering over things. So, like, if you were typing new Kubernetes pod, you're going to get the documentation. It's going to tell you if you mistyped the property name, things like that.

ADAM GLICK: Intellisense for JavaScript, so to speak.

JOE DUFFY: Exactly. Sort of Intellisense for your Kubernetes config, which is super useful-- statement completion, refactoring. It's kind of magical. Python, so we actually find a lot of people with operations backgrounds will gravitate towards Python. A lot of developers, too, in Go, which we actually have some folks that are embedding Go into larger projects, so Cockroach Labs just announced that they're doing CockroachDB hosted on Kubernetes, and they're actually using our Go SDK sort of in the background to do that. So the Go SDK is designed to be embeddable in larger systems. And we're actually working on .NET support right now. That will bring C#, F#, basically any .NET language.

CRAIG BOX: I was definitely going to ask about that because you're talking about the languages of sysadmins, yet you were working in the environment of developers. And I thought that given your background, that .NET seemed quite an obvious omission from that list.

JOE DUFFY: Yeah, it's interesting. The way we talk about it right now, we're kind of focusing on a lot of people that already are doing infrastructure as code. I think what we're starting with is solving problems that a lot of people already have today, people that are struggling with existing technologies. So it's a lot of infrastructure engineers and developers who are increasingly doing infrastructure that we're currently targeting. Over time, we're really broadening to more and more developers. The goal is to really make infrastructure in the cloud accessible to all developers. And so over time, we'll definitely be investing in more of the sort of application-level languages, if you will, like .NET, like Java. So that's going to be an increasing focus for us going forward.

ADAM GLICK: Java was the other one that seemed kind of a really powerful and common language that I didn't see there. Do you plan on working the other direction as well? So a lot of, you know-- you have some of the newer languages there, but things I think of like Ruby, or PHP.

JOE DUFFY: Yeah, it turns out one of our most upvoted GitHub issues is Ruby support. The reason why we didn't come out initially with a lot of languages was we wanted to get the platform to a steady state. And we just finished our 1.0 release about a month ago. And that was really all about getting it to a stable point.

There's also, we have a multi-language runtime, and it was a kind of tricky thing to implement. And we wanted to make sure we had sort of the abstraction and the boundaries correct between the responsibilities of the engine versus the language runtime itself before we started going and stamping out lots of languages. Now we're pretty confident in the foundation, and so we're definitely going to invest in a lot more of the languages. Definitely Ruby is on the shortlist for sure.

CRAIG BOX: A lot of similar tools came from a space of trying to abstract away the differences between clouds and, say, create a VM. And today it might be on GCP. Tomorrow, it might be on Azure. Does Pulumi take that approach, or is it explicitly creating infrastructure for a particular vendor when you go to ask for it?

JOE DUFFY: It's explicitly creating for a specific vendor. That's kind of the base case. So we have packages for all the different clouds. And so when you take a package and new up something, you know you're getting a GKE cluster with this specific node pool configuration and these OAuth scopes, and you're programming at that level when you want to. And so we don't try to get in your way.

And that's really important because although a lot of these other tools that sort of abstract over the differences make for good demos and usually a good getting started experience, we wanted something that was production ready. We wanted something where infrastructure teams with very rigorous requirements could come and be successful using this. And for that, it was clear talking to end users that we just couldn't abstract away those differences.

That said, because it's a programming language, you can introduce abstraction. And we have some packages of our own that make it easier. In AWS, for example, everybody has to configure a VPC, a Virtual Private Cloud. This is the way you get public and private subnets and configure basically some network-level isolation. Well, it turns out the best practice today is you take 2,000 lines of YAML and copy and paste it from a website and read a 20-page white paper on security best practices. And then good luck. It's up to you to make sure that works.

Whereas what we did is we took that and we distilled that down into a single package so that you can say instead, hey, give me a VPC. I want the default configuration options. And we give you knobs to go in and tweak it. And we also have other abstractions that are cross cloud even. So the one I just talked about was specific to AWS. We actually have some that are cross cloud. For example, if you want to run a serverless function every hour or on a Cron schedule, we have a package that can implement that on a bunch of different clouds.
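The "defaults plus knobs" pattern Duffy describes can be sketched like this-- an illustrative plain-Python stand-in, not the real Pulumi VPC package, whose actual option names differ. A wrapper bakes in a sensible configuration and lets callers override only the pieces they care about.

```python
# Illustrative "defaults plus knobs" pattern (not the real Pulumi API).
# The option names and values here are hypothetical.

DEFAULT_VPC = {
    "cidr_block": "10.0.0.0/16",
    "availability_zones": 2,
    "subnets": ["public", "private"],
    "nat_gateways": 1,
}

def make_vpc(name, **overrides):
    """Return a VPC config: best-practice defaults, caller overrides on top."""
    unknown = set(overrides) - set(DEFAULT_VPC)
    if unknown:
        raise ValueError(f"unknown options: {unknown}")
    return {"name": name, **DEFAULT_VPC, **overrides}

# The default configuration in one line...
vpc = make_vpc("prod")
# ...with knobs to go in and tweak when needed.
big = make_vpc("prod-big", availability_zones=3)
```

The 2,000 lines of copy-pasted YAML become one reviewed, versioned package, and consumers only ever see the knobs.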

And some customers even implement their own abstractions. Maybe they care about just running this one particular application and making it multi-cloud friendly. And oftentimes, that involves Kubernetes in some manner. Joe and Craig sometimes call Kubernetes a Goldilocks abstraction. It's just right at different levels of abstraction. And that's sort of the same aesthetic that we were aspiring to.

ADAM GLICK: This sounds like an incredible project for something to build. What made you decide that it was a company, rather than a project?

JOE DUFFY: We definitely wanted to build a business, first and foremost. But given our backgrounds, my co-founder Eric and I have worked on a lot of cool technologies throughout the years. And there's no question, when I was looking at what I was going to do next, I could have done something fun and exciting at Microsoft. I could've gone to Google and worked on some great technologies.

But what I was really looking for was, hey, more connection to the business. Working at a big company, oftentimes, you're not as close to the end user. You're not as close to what's actually making money, what isn't making money. When I told people I was going to go start a company and it was going to have anything to do with DevTools, they kind of looked at me cross-eyed and said, you must be nuts. Developers don't pay for anything, right?

And although that's true, enterprises do pay for things. IT organizations pay for things, right? And so our bet was it's going to be open source. I think the cool thing about it-- the best part of the company and the project for me has been that people get evangelical about it. When people use it, they love it. They tell their friends. Almost all of our growth has been word of mouth. We don't really spend anything on marketing right now. And that's been wonderful to see.

And so the bet was, hey, if people love it genuinely because it really does make their lives better, they're going to tell their friends. All the developers in the world will fall in love with this thing. And then we'll sell it to the teams in the enterprise that want to operationalize this. And there are enough proof points and we talked to enough customers in advance that we were sure there was something there. And we've had to prioritize features like SAML SSO, Active Directory integration, role-based access control. We're launching soon some policy and security extensions to the product.

We definitely had to do some different pieces of work, but I've been happy that most of the commercial success we've seen has tracked pretty closely to the success in the community. And to me, that's just kind of magical. You don't see that happening very often, and it could've gone the other way, right? We might have had to pivot because people weren't paying for it. But it's going really well so far.

ADAM GLICK: What are the common things you see? As you said, developers don't pay for things, but when it gets big enough and critical enough, the operations teams want to manage it, and the operations teams end up paying for it. What's the business model, and what open source license did you choose to use?

JOE DUFFY: There's actually an article that came out just a few weekends ago from Peter Levine at Andreessen Horowitz, who invested in GitHub early on. And he talks about three models of open source business. The first wave was kind of the Red Hat model, right? So you had some open source project, and you charged for support, training, professional services, things like that. The second model was sort of open core, right? Maybe Elastic is a good example, where you open source most of it, but then you hold back some aspects of the project, and they're closed source. You only give them to your enterprise customers.

Model three, which is what Pulumi-- what we bet on-- and this article wasn't out. I didn't know this was going to be, quote, model three, so in hindsight, it looks like a good thing. But so we bet on SaaS. What we find is a lot of people and teams today are willing to pay for services, right? They use GitLab. They use GitHub. They use PagerDuty. They use Twilio.

SaaS has become sort of a default for a lot of companies, and the nice thing with open source is it doesn't imply this awkward, weird split where you're artificially holding some code back, right? Literally, everything is open source, and then it's the service that you end up paying for. And with Pulumi, you don't have to use the service. It's just there for convenience. It turns out it's the easiest way to use it, so a lot of people do end up using the SaaS, and we actually have a free edition on the SaaS as well.

The nice thing about this is it's really easy when we have to decide what's open source. It's all open source, right? We don't hold things back artificially, which I think builds more trust with the community. It means when somebody shows up with the pull request for this great new feature, we don't have to say, oh, sorry, that competes with our commercial offering. We're not going to merge that thing. No, no, no, we're happy to merge anything that adds value to the community.

And so I think that's the biggest advantage to the current model. We open sourced it under Apache 2, mostly because that's sort of the CNCF and Kubernetes standard. And I worked on a lot of open source at Microsoft. I was actually in charge of open source for a large part of the company for a while. And so I saw all the different licenses. And Apache 2 is just the cleanest, most trustworthy license. I knew that for companies like Google, if they wanted to use it in some way, it was not going to cause friction.

And I know a lot of these pseudo open, but kind of closed licenses are now becoming common. And I just fundamentally don't believe in those. And that's why I'm so happy with the business model we chose because we kind of don't have to worry about that, right? It's all open source. It's all there. It's open. And feel free to use it. SaaS is the way we make our money. It's not by trying to sell that open source project.

CRAIG BOX: A number of other vendors were in that position over the course of the last 12 months and have pulled back a little bit, especially in regard to prominent vendors running versions of their own software as a SaaS. Is that a concern that you have?

JOE DUFFY: It's not, because a lot of those companies that did that are open core. And that was the reason they had to change those licenses: for example, the Amazon Elasticsearch situation, where Amazon sort of forked the project and then started re-implementing all of the open core features that Elastic had kept private.

That said, it's definitely a concern, but to me, there's a little bit of a benevolence part of starting the project where that would still be success in my mind. Because that would mean that we've changed the way the industry approaches infrastructure as code and that the vision really did pay off in the end. And so if Amazon, Azure, or Google Cloud are all taking this and using it, I actually think that's success in sort of a different way.

Also, one of our unique things is that we're multi-cloud. We don't pick a favorite. We love all of them. We're partners with all of the clouds. And increasingly, especially folks picking up Kubernetes, they're thinking about multi-cloud for a lot of good reasons. And so we can sort of stay neutral in that equation.

ADAM GLICK: You offer a free community edition that's available as part of your SaaS offering. How does that differ from what people can get from GitHub today?

JOE DUFFY: Basically, when you download the open source project, install it via Homebrew or build it from source or however you get it, the way infrastructure as code works, it sort of has a desired state and a current state, and it needs to do some bookkeeping around that. And it needs to know that multiple people aren't deploying at the same time, which might corrupt an environment. So it actually needs to do some state management and concurrency management.

And so by default, when you download the tool, it gives you the option. It says, hey, how would you like to set this up? Option one, which is the preferred option just because it's so easy, is to use our free SaaS. And if you do that, it's actually a lot like working with GitHub itself. The analogy I like to draw is our CLI is kind of like Git, and our SaaS is sort of like GitHub, right? So they go hand-in-hand. Just like when you do a commit in Git, you can go to GitHub and see the history. And you see who checked in, when, what they changed, the full diff. And so we do that for infrastructure.

So when people are making changes to their Kubernetes object configs, or they're scaling things up, or they're adding a new virtual machine, or they're upgrading their EKS cluster or something, all of that is being tracked in the SaaS. You can also run it offline if you don't want to use the SaaS. Maybe you're behind a firewall and you can't depend on a third-party service, or lots of different reasons. So we have pluggable storage back ends that you can use.
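Concretely, the Pulumi CLI lets you pick between the SaaS and those pluggable back ends at login time. A sketch of the common options follows; the bucket name here is hypothetical:

```shell
# Default: store state and deployment history in the Pulumi SaaS backend.
pulumi login

# Fully offline: keep state on the local filesystem instead.
pulumi login --local

# Or self-manage state in an object store you control,
# e.g. S3 (GCS and Azure Blob storage work similarly).
pulumi login s3://my-pulumi-state-bucket
```

Whichever backend is chosen, the CLI does the same state and concurrency bookkeeping described above; only where the state lives changes.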

And basically, that's all free. What you pay for, then, is again similar to GitHub: if you want to use an organization, if you've got a lot of people on your team who want to collaborate on a shared set of projects, that's when you pay, and it's a nominal fee. It's sort of SaaS-y in the pricing model. But your team can still use the fully offline version if you want. It is truly open source, so everything's there. You can do what you want with it.

CRAIG BOX: Pulumi was founded in 2016. Back then, was it clear that Kubernetes was going to be the platform that, quote unquote, won?

JOE DUFFY: It was clear to me for a while. In fact, I had a $1 bet with my co-founder. We started in my basement, which was exciting. I always wanted to start a company either in a garage or a basement, so I can check that off the list. He was kind of saying, hey, Docker, it's theirs to lose. Docker Swarm is going to be the thing that emerges. Mesos, obviously, had a lot of fanfare at that time. It looked like Microsoft might even purchase Mesosphere. But to me, Kubernetes really had what I thought was the right level of abstraction, the right set of concepts, that seemed like it could really become sort of a kernel for container-based compute.

Also, I just have to say, I typically like the Google aesthetic in terms of developer projects, and I like the fact that there are research papers behind them. I worked on a research project for a long time, so I've read lots of Google research papers. And I just love that cultural component and the fact that there actually are papers about the Borg architecture that pre-date Kubernetes. There's clearly a lot of very intelligent work that has gone into the Kubernetes platform. And for that reason, I also had confidence that in the long run, it would stand the test of time.

ADAM GLICK: Well, if you're ever curious, we're always hiring. What comes next for Pulumi? You mentioned that you're going to be supporting a number of additional languages as you move forward. What else are you looking to add to the project?

JOE DUFFY: We're doing really well with infrastructure teams right now. If you look at the Kubernetes adoption that we have, a lot of it is people that have to spin up clusters, they have to configure the security and the networking for the clusters. And a cluster isn't an island. It connects to other things. And Pulumi makes that a lot easier. I think the thing as we go forward-- I alluded to this earlier-- we're really trying to empower more and more of the engineers in the world. And increasingly, that's developers.

I think of the number of people that are deep infrastructure experts versus the number of developers in the world, which is 20 million plus people. There's a lot of opportunity to help those people be more self-serve. Help them use containers. Help them use the best of what the cloud has to offer. And so even at KubeCon this year, we're going to launch a few exciting projects that really, I think, will change the way people think about the split between inner loop development and infrastructure development.

And over time, over the next year especially, the additional languages will help, but really, it's about raising that level of abstraction to solve some of these common problems. So there's less toil, it's less daunting to get started, and it's more enjoyable and easier to build really powerful software using the cloud. I think that's the thing we're really going to focus on for the coming year and, frankly, for many years to come after that.

CRAIG BOX: And finally, what exactly is a pulumi? What does the name mean?

JOE DUFFY: Pulumi is the Hawaiian word for "broom," which has a somewhat unfortunate but very personal story. My good friend Chris Brumme, who was actually an early engineer on the CLR team, was at Google working on machine learning until recently. He was advising the company, and then he unfortunately passed away very early in its life. His last name sounds like "broom," but it turns out it's actually pronounced "Brum," so it's kind of a little bit of an inside joke, too. People always mispronounced his name.

But I did get his clearance that that was an OK thing to do before he passed. So, unfortunate story, but it's a good way to kind of honor him. He kind of inspired us. He actually told Eric and me that we had to start this company, and he kind of shamed us into it and said, you gotta go do this thing. And so it's our way of kind of honoring him.

CRAIG BOX: That's a lovely story.

ADAM GLICK: Joe, thank you very much for coming on the show. It was great to talk to you today.

JOE DUFFY: Yeah, thank you. It was great.

CRAIG BOX: You can find Joe on Twitter, @funcofjoe, or on the web at joeduffyblog.com. You can find Pulumi at pulumi.com.

[MUSIC PLAYING]

CRAIG BOX: Thanks for listening. As always, if you've enjoyed the show, please help us spread the word and tell a friend. Or tell us if you see us at a conference! If you have any feedback, you can find us on Twitter, @kubernetespod, or reach us by email at kubernetespodcast@google.com.

ADAM GLICK: You can also check out our website at kubernetespodcast.com, where you can find transcripts and show notes. Until next time, take care.

CRAIG BOX: See you next week.

[MUSIC PLAYING]