#179 May 11, 2022

Docker, with Scott Johnston

Host: Craig Box

Docker CEO Scott Johnston joins us to talk about the announcements from this week’s DockerCon, the transition from an enterprise to a developer tools company, and the Internet’s favourite whale.

Do you have something cool to share? Some questions? Let us know:

Chatter of the week

News of the week

CRAIG BOX: Hi, and welcome to the Kubernetes Podcast from Google. I'm your host, Craig Box.


CRAIG BOX: By the time you hear this, I'll be on a plane to Spain, hoping it won't rain. It's certainly rained a lot here in the last hour, and given how many insects it brought out of the woodwork, I think it might be the right time to be moving on. It's certainly a much longer trip to KubeCon from New Zealand than I'm used to.

You would get change from 2 and 1/2 hours from London. It's taking me well over 10 times that. Perhaps one day you can all come down for KubeCon Aotearoa and enjoy the experience yourself. In fact, if you dig a hole from where I'm sat right now, "Total Recall" reboot style, you'll end up in Córdoba, which is only about 500 kilometers from Valencia.

I might like to put down two pieces of bread and make the fabled earth sandwich. Unfortunately, Valencia ends up just off the coast of New Zealand. It turns out that there aren't that many places in the world where you have a population exactly on the other side.

There's a lot of ocean. Even the giant that is Australia is mostly opposed by ocean. There's a map in the show notes if you want to check out where your antipode is. Next week, we will bring you up to date with what's happening at KubeCon.

If you want to know where you can find me, please follow me on Twitter, @craigbox, and @KubernetesPod. In fact, you should just go and do that now, so that if I go to a gig again, you'll be the first to see a video from it. Anyway, in case you're looking for something to listen to while you're on a long plane flight, we've just launched a new Google Cloud podcast site. Check it out to see what's going on in the non-Kubernetes parts of GCP.

We even have a podcast with audio versions of our blog posts. And only we could get away with calling that podcast "Google Cloud Reader." Let's get to the news.


CRAIG BOX: Docker has announced Docker Extensions and Docker Desktop for Linux at this week's DockerCon. Sounds a little weird when you say it that many times in a row, doesn't it? Docker, Docker, Docker. Docker Extensions allows developers to discover and add complementary development tools to Docker Desktop. Docker Desktop for Linux brings the desktop user experience enjoyed by Mac and Windows users to Linux workstations. Listen on to learn more.

Google Cloud announced the general availability of Spot VMs on GCE and GKE. Spot VMs are the next generation of preemptible VMs, with capabilities including runtimes longer than 24 hours, lower preemption rates, and deletion upon termination. You can also run Spot Pods on GKE Autopilot, where you can run fault-tolerant workloads at lower costs. Preemptible VMs also continue to be supported, now with the same pricing model as Spot.

Linkerd creators, Buoyant, have announced the introduction of Managed Linkerd to their cloud product. They promise to handle control and data plane upgrades from within Buoyant Cloud. And the new release will include expanded monitoring capabilities. The feature is currently in private beta.

A special offer to you, dear listener, from our friends at the Linux Foundation. CDCon, the event for continuous delivery, is being held in Austin, Texas on June 7th to 8th. If you want to attend and hear from no fewer than 18 keynote speakers — will there even be time for regular sessions? — there's a code in the show notes for 40% off your ticket price. You can, of course, also attend the event virtually.

Amazon has launched a Kubernetes resource view for EKS clusters. To date, their console has only supported workload resources like deployments, jobs, and daemonsets. The new release adds all standard Kubernetes API resource types, with structured and raw JSON views. You can also use this feature with clusters attached to the AWS console.

Finally, Pulumi, an engine created to let you use programming languages so you don't have to use YAML, launched support for YAML. That sentence gave me a headache, and I only chose to mention it because of a blog post by Pulumi's Lee Briggs. Given the observation that YAML is a superset of JSON, and that lots of programming languages have libraries for generating JSON, he gives examples of templating, going through Perl, and finally landing on Fortran, challenging the reader to go even more obscure. And that's the news.
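The observation Briggs builds on is that YAML 1.2 is a superset of JSON, so any language with a JSON library can, in effect, emit YAML. A minimal sketch in Python — the resource shape here is purely illustrative, not a real Pulumi program:

```python
import json

# A hypothetical Pulumi-style resource description, built as plain
# Python data structures (illustrative names, not Pulumi's schema).
resource = {
    "name": "my-stack",
    "resources": {
        "bucket": {
            "type": "aws:s3:Bucket",
            "properties": {"acl": "private"},
        }
    },
}

# json.dumps produces JSON, and every JSON document is also valid
# YAML 1.2 -- so this output can be handed to any YAML parser as-is.
yaml_compatible = json.dumps(resource, indent=2)
print(yaml_compatible)
```

The joke, of course, is that once you accept this, any language that can print braces and quotes is a YAML templating engine — Fortran included.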


CRAIG BOX: Scott Johnston is the CEO of Docker. Welcome to the show, Scott.

SCOTT JOHNSTON: Craig, great to be here. Thanks for the invite.

CRAIG BOX: Kubernetes and Docker have a long and intertwined history. Are we allowed to be friends now?

SCOTT JOHNSTON: Not just friends, best friends. Docker and Kubernetes have been on this journey for years, since 2014. And Kubernetes is one of the most popular targets for applications that are built in Docker Desktop. Moreover, Docker Desktop ships with a full stack of Kubernetes there for devs when they want to do that inner loop, rapid iteration of refining their application, refining the functionality. So no, not just friends, best friends.

CRAIG BOX: Aw, thank you.

SCOTT JOHNSTON: Absolutely. Thank you.

CRAIG BOX: Of course, Kubernetes was famously announced at DockerCon, I want to say eight years ago this year. Is that right?

SCOTT JOHNSTON: The very first DockerCon, June 2014. None other than Eric Brewer announced it on stage, live at DockerCon 2014.

CRAIG BOX: It's fair to say that your company has evolved quite a bit since then. Can I call this company Docker 2.0?

SCOTT JOHNSTON: For the "Star Trek" fans, this could be Docker, "The Next Generation." But for the "Star Wars" fans, it could be Docker, "The Return of the Dev." So you tell me which one.

CRAIG BOX: Return of, Revenge of — probably not revenge. Revenge isn't really the word we're looking for.

SCOTT JOHNSTON: Yeah, let's call it Rise of, or Return of, I think.

CRAIG BOX: First of all, let's talk a little bit about your personal background. I tried to calculate. It looked like you have three degrees from Stanford, but you corrected me and said you actually have four. The fourth one looked a bit more like your cat was walking across the keyboard, so I wasn't familiar with that combination of letters.

SCOTT JOHNSTON: Yeah. I'm not sure who's counting. It was more like, they made a mistake letting me in, so I better get some degrees while the getting was good before I got out.

CRAIG BOX: Were no other universities available? They didn't let you off the campus?

SCOTT JOHNSTON: No, it was more I couldn't find the door. What's the Joe Walsh song? "It's hard to leave when you can't find the door," so I couldn't find the door.

CRAIG BOX: I have been there. It's a nice campus, pretty open plan, I guess. I don't want to challenge you here, but you could find your way out if you needed.

SCOTT JOHNSTON: Well said. Maybe. It's a long walk off campus.

CRAIG BOX: It is. Perhaps the public transport wasn't what it is these days.

SCOTT JOHNSTON: There you go.

CRAIG BOX: And I don't mean to make you feel old, but you actually interned at Sun and then worked at Netscape. Do you have any fun stories from those days?

SCOTT JOHNSTON: A lot of interesting things at Netscape at the time. Look, it was the mid-to-late '90s. There were possibilities everywhere as far as the web. Everything looked exciting and new. Within Netscape, of course, there was this really interesting conversation going on around Java and what we'd come to call JavaScript.

But aside from the name, nothing is related between the two. JavaScript was, of course, intended to be a very lightweight scripting language that anyone could use to automate and animate web applications — HTML applications. And Java became, over time, a middle-tier programming language, a serious programming language for business logic, business applications.

CRAIG BOX: Somewhat famous for being quite heavy, though, Java. Isn't that like calling your programming language "BoulderScript"?

SCOTT JOHNSTON: [LAUGHING] Well said. It didn't start out that way. If you recall, the first Java interface on the server side was Servlets, which was a very lightweight way to respond to an HTTP request. It was trying to replace CGI, which was a process-per-request beast.

What happened over time, though, is that the requirements — or the perceived requirements — made Java heavier, and heavier, and heavier on the server side, which is where we ended up, as you said, with J2EE. It didn't start out that way, though. It started out much lighter weight — web-y, very efficient, back and forth.

CRAIG BOX: It's always felt to me like there's been a sine wave with, say, network computing and Sun, for example. The idea of what we do on our desktop and then moving it all off to the servers, and then bringing it back, and so on. That just sort of repeats itself over time. Is that what you've seen throughout your career?

SCOTT JOHNSTON: Yeah. I guess when you're old, you can look back at multiple waves. I was first fascinated by computers in the '70s, if you can believe it. The fascination was these mainframes that were behind glass walls.

And what fascinated me about the Apple II and the TRS-80 is that all of a sudden, all that power that was behind that glass is now accessible to humans. So there you go, one sine wave. And then you get into client/server in the late '80s, early '90s, the rise of Sun, as you just said. And then, whoa, web takes off and now you've got big servers on the back end with very lightweight, thin clients in the form of the browser and desktops.

Here we are again with cloud computing, going back and forth between, is it centralized in the cloud? Is power out on the edge, or in your car nowadays, right? I see it as fundamentally driven by the underlying physics of our industry. So you have Moore's Law on the silicon, you've got Metcalfe's Law in the network, and something similar with the storage devices. As all those physical forces move forward, bottlenecks move around.

And so you see this ping pong back and forth, the sinusoidal wave, as you said, of, where do you put the logic? Where do you put the decision? Where do you collect the data and process the data? I think we're just going to continue to see it as our industry evolves and as these forces evolve at different rates.

CRAIG BOX: Have you thought about inventing a Johnston's Law just to fix this?

SCOTT JOHNSTON: What's the joke — the xkcd joke? There are 14 standards on the internet; let's invent a 15th to have one standard on the internet. No, Johnston's Law would just be one more thing.

CRAIG BOX: You can never have too many standards.


CRAIG BOX: Did you know Tom Lyon at Sun?

SCOTT JOHNSTON: Did not know Tom. My misfortune.

CRAIG BOX: I mention that because he tweeted last week that it was his 40th anniversary, and posted his employee card from back then.

SCOTT JOHNSTON: The Twitterverse celebrated that, too.

CRAIG BOX: But it wouldn't have been a big company at that point. That's a startup, if that was 1982, if my math is correct.

SCOTT JOHNSTON: Good math. What was interesting is, when Netscape split up, the browser and the website was sold to AOL. Guess where I was back again? I was on the server side. The server side got sold to Sun in the Sun Netscape alliance, which became its own kind of interesting adventure with Sun into server software.

CRAIG BOX: Now you worked for a while at LoudCloud, which then became, effectively, through some set of steps, the Andreessen Horowitz venture capital firm.

SCOTT JOHNSTON: A few more steps to it than that, but that was a really interesting adventure, because LoudCloud was serving this Cambrian explosion of startups in the '90s who needed infrastructure quickly to scale their new startup website or their new startup application. And LoudCloud was the first PaaS, although we didn't call them PaaS at the time. We called them MSPs.

And LoudCloud would stamp out, in a very automated fashion — before Puppet, Chef, Ansible, and Salt — Apache web server on the front end, BEA WebLogic as the middle tier, and Oracle database as the back end. And LoudCloud knew how to stamp those things out and scale them dynamically, horizontally.

Really good when the startup market was ascendant, and VC money was flowing, and startups were getting funded, and growing, and scaling like crazy. Not so good when the market rolled over. That led to the divestiture of LoudCloud's services business and the scaling down to what became Opsware, a software company focused on data center automation. Not to jump ahead too much, but that was my first restructuring experience, which gave me some breadcrumbs to think about as we were considering what to do with Docker in 2019.

CRAIG BOX: It's a long career and a short podcast, so we probably don't want to go too—


CRAIG BOX: I apologize. Again, I'm not trying to make out that you're old here. I very much enjoy hearing the stories of back in the day.


CRAIG BOX: Fast-forwarding perhaps a little bit, were there any lessons from the experience that you have, and the fact that you've lived this life and these sine waves over time, that you've been able to bring not just obviously to the restructuring as you mentioned, but to the technology stack, and how in some fashion, Docker is able to bridge that gap between client and server software in a way that wasn't possible before?

SCOTT JOHNSTON: One of the a-has from LoudCloud was, we put up this PaaS platform — again, it wasn't called PaaS — and customers were finding a lot of friction to deploy their applications onto it.

And we actually spent most of our time helping them get their Java application, get their web front end and HTML pulled together, get their Oracle schemas. Getting all that packaged and shipped into the platform took a lot of work. And it was actually one of the most valuable and most loved services of the product.

But through that lens, that's exactly the struggle that the configuration management tools — Puppet, Ansible, Salt, Chef — were addressing. I was at Puppet in the summer of 2013 when Docker, having been open-sourced two or three months prior, started showing up in the Puppet accounts. And Puppet was trying to get workloads from dev to production, and then spread out through production, as automated as possible.

Docker shows up and radically reduces the amount of work to package your application and all its dependencies up into an immutable container, and ship it from dev to production. It took one-fiftieth the effort to do what Puppet, Ansible, Chef — and LoudCloud a decade prior — were trying to do. So maybe a better way to say it, Craig, is, having lived through several cycles of trying to help customers get their code from dev to production, when I saw Docker, it really lit up my imagination in terms of what's possible. And boy, this solves so many problems that I've seen time and again.

CRAIG BOX: A definite throughline there between, obviously, the places you worked in the past and LoudCloud, PaaS, Puppet, and so on. You land at Docker in 2014. What was the size of the company when you joined?

SCOTT JOHNSTON: About 20 of us at the time — myself, I was the first product person, and Ben Golub, CEO, and then Solomon Hykes, the founder, and that was it.

CRAIG BOX: We mentioned before, obviously, Kubernetes was announced at DockerCon in June 2014. You joined before that event?

SCOTT JOHNSTON: Yeah. I joined about five, six months before. Already, though, Kubernetes and Docker were in conversations. And so as you know, Google's San Francisco office was a couple of blocks away from Docker's San Francisco office. And so lots of discussion going on back and forth in both offices before the actual announcement.

CRAIG BOX: Can you tell me a little bit about the feeling there? Was there excitement that, at a small company like Docker, that someone like Google was interested in the technology and building something based on it?

SCOTT JOHNSTON: What was exciting is Google was validating for us the challenges in the space that Docker was solving. And they were also then, in some sense, a glimpse of the future. Because Google had scaled up to tens of data centers around the world by that point, and thousands of servers per data center. And having operated at scale, they could see the challenges that were coming to operate at scale.

And Kubernetes, as you know the history of it, very much came from that experience. And so the fact that they were bringing that experience and that perspective — what it looks like to operate at scale — and asking, while this container thing is great, how can we embrace it as a big ecosystem and help make it consumable and scalable to tens of thousands of servers at a global level? It was pretty exciting.

CRAIG BOX: If there's one thing that Google perhaps was not famous for at the time — there were the four blessed programming languages inside, probably fewer than four at that point, and the single monorepo. And you had to write everything in Google's way. If you were able to write things in Google's way, also personified by things like App Engine, then you were able to access the massive scaling machine. But if you wanted to write in Ruby, or some other language that didn't fit that paradigm, you weren't able to do that.


CRAIG BOX: Docker obviously brought the experience for developers to be able to write whatever they wanted and use tooling that — not to put too fine a point on it — they actually loved to use.

SCOTT JOHNSTON: All credit to Solomon and his cofounders. What Docker brought together, right, was an intersection of three bodies of tech. On one hand, they took the hard-core Linux kernel container work that Google was a part of and that Red Hat is a part of — the cgroups and the namespaces work, hard-core Linux engineering that maybe two people outside Google really understand.

Then the second body of tech was the copy-on-write immutable file systems, the layered file systems. The third body was the git semantics. And so Docker brought all three of those together into an easy mental model of containers, without the developer having to understand cgroups and namespaces, without the developer having to understand copy-on-write file systems. And if the developer was familiar with git push and git pull — guess what? Docker push, Docker pull, Docker commit. It just made sense.

It made it easy for developers to start to use. And because the boundaries were very well defined inside and outside the container, to your point, you could put any language, and its dependencies, and whatever else inside the container. Guess what? It didn't matter what happened outside the container. Solomon and team got the abstractions really right in terms of those boundaries and what made sense to a developer, taking advantage of really powerful technologies under the hood.

CRAIG BOX: What was the explosive growth of Kubernetes like, viewed from your perspective, as someone building a part of the stack, but then also as someone who's trying to build their own tooling which in some way does similar things?

SCOTT JOHNSTON: It was interesting because the explosion was clearly bringing in another voice in the community, meaning Kubernetes spoke loud and clear to operators at scale. Kubernetes spoke that language, and you could see the experience of Google in some sense. What's the joke of, "Soul of a New Machine," Tracy Kidder's book? You open up a computer and you can see the organization inside the computer. You can see, in Kubernetes, the Google history, and the types of problems they're trying to solve with it.

So on one hand, it brought a whole new population into the conversation, this ops population that was operating at scale. What was interesting is, we were trying, in parallel, to provide orchestration tooling — it was called Swarm at the time — that was developer facing, that was not borne of the experience of operating global data centers at tens-of-thousands-of-servers scale. And so we were trying to solve the problem for different personas coming from different backgrounds. The surface area of the problem was still very, very similar, so that was interesting to see.

The second thing which was interesting to see, and all credit to Google, is they constructed the Kubernetes product architecture with multiple open interfaces. And each of those interfaces had a pretty distinct community that could then be created around it, so the storage interface, the networking interface, the authentication interface. And one of the a-has of open source, of course, is that you create products that have enough extensibility and enough interfaces that allow the community to take it where they need it to go, and take it to where they can deploy it to achieve their outcomes. The Kubernetes team did a very nice interface design of the product that allowed it to be taken to these different places by these, call it "sub-communities", that were very engaged, very active, and very, very high levels of technical merit and contributions. Those two trends, Craig, were really interesting to see.

CRAIG BOX: There's often been this perception of Google sending messages from the future, as explained by Doug Cutting when talking about Hadoop, or in the Kubernetes space, as Tim Hockin likes to describe it, Google having a crystal ball and saying, here are the problems that you will have in the future. You don't have them now.

But someone who's looking at Kubernetes at the beginning will say, oh, it addresses all of these things, and I don't have these problems. I'm not worried about it. And that suggests an umbrella under which someone can come and bring a lighter-weight product. But unless there's a pathway between the one and the other — and as we've seen in the case of Kubernetes, you will eventually need that technology — you may as well think about it today.

There are other vendors in other adjacent parts of the cloud native stack who are still doing a similar thing. They are coming in and saying, here is a simplified version of this particular product, or, we're only going to address things as far as the developers are concerned. Do you have any advice or thoughts for them in that particular market space?

SCOTT JOHNSTON: One of the lessons learned in startups — every cycle we go through this and learn it, sometimes the hard way — is that a platform can be a very, very powerful tool. But it's very hard to insert into a market as a platform. A platform with a solution on top of it — a fixed solution that the ecosystem can get their heads around, that is a smaller surface area to attack and add value to, whether it's developer, operator, security, it doesn't matter — you need to be able to solve a problem with that solution built on the platform.

And if that is effective, then guess what? The platform gets pulled into the organization. And the vendor, the platform sponsor, can choose to expand from there. But to go into a market — or, as you said, go into the CNCF space — with a pitch that leads with a platform for a particular surface area or particular problem set is really tough.

I would go in and solve very tactically, very specifically, very surgically, go in and solve a real problem for real users. OK, it's built on a platform? Great. That's how you sneak in from there.

CRAIG BOX: You mentioned before, experience with restructuring. What can you tell us about the start of this process within Docker, the acknowledgment that something would need to change?

SCOTT JOHNSTON: There's only so much that I can say about those conversations. But maybe one observation that's relatively public is, we had this phenomenal, ongoing, bottoms-up adoption, consumption, and word of mouth on the developer-facing side of the product — the content from Hub, and Docker Desktop, and the build tools, and Compose. Relative to the growth in the commercial business, there was an impedance mismatch there. That's probably the best way to say it.

And so you had this beloved brand, so much energy, so much excitement, so much growth over here on one side, all free, 100% free. And you had OK growth on the other side of the business. But you'd stand back and say, is this company really living up to, or fully realizing, commercially, the value that's being created in this industry?

I mean, think about it: in 2012, there was no really democratized container industry. A year after the thing is open sourced, you have this Cambrian explosion of innovation and ecosystem, and everyone is standardizing on this. And let's take a quick diversion, Craig, to acknowledge: we as an industry have not balkanized or fragmented. If we look back at the Unix industry — Solaris, HP-UX, AIX — applications were written with no interoperability.

Unfortunately, Linux hit the same thing. You write to SUSE; you can't run on RHEL, can't run on Ubuntu by Canonical. Maybe we should knock wood — thus far, we don't have that balkanization in the container industry. You write an application in an OCI-compatible Docker image, and it can run on AWS EKS. It can run on AWS ECS. It can run on GKE. 100% compatibility.
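That interoperability rests on the OCI image format: every compliant registry and runtime agrees on the same manifest shape, so an image built once runs anywhere. A minimal sketch of an OCI image manifest — the digests and sizes below are placeholder values, not a real image:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:0000000000000000000000000000000000000000000000000000000000000000",
    "size": 7023
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:1111111111111111111111111111111111111111111111111111111111111111",
      "size": 32654
    }
  ]
}
```

Because EKS, ECS, and GKE all consume this same manifest and layer format, the portability Johnston describes falls out of the spec rather than any one vendor's goodwill.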

CRAIG BOX: And I think it's perhaps worth giving credit to Docker there for the work that they did to standardize that, to found the OCI.

SCOTT JOHNSTON: Standardize it, and listen to the community, and move it forward. And everyone has embraced that and run forward. So that's another lens with which to look at your question, which is, 2019, you look at this entire industry that's compatible, that everyone is growing, and ecosystem partners are growing.

And there's free stuff, but there's a lot of commercial success there as well. And the original program sponsor — how far have they made progress commercially on that? That discrepancy, that impedance mismatch, is one of the places you start to have these conversations, to come full circle to your question.

CRAIG BOX: And as I mentioned before, the people who use these tools love them. And the fact that it's got the git semantics, and so on, and that it all just works and moves, there is a huge emotional attachment, I think, from the community to the word Docker.

SCOTT JOHNSTON: That, and I'll say the whale logo goes far, because it's approachable. It's friendly, notwithstanding being beached. The way we think about that is that it's a precious, precious resource. It's a blessing when you have users that are so passionate, and they're globally talking about the product, and globally doing meetups on their own behalf and such.

As we went through this adventure since the restructuring, we wanted to preserve that. And so the vast majority — like 80-plus percent, which is some 8 to 9 million developers — are using the products completely for free. And that is by design.

So we just wanted to make it accessible to anyone, from any region of the globe, to try it out to solve one of their problems. Or, if they're inspired by something, to build and develop in a container, in a stack. That means that we have had to find ways to make a commercial business by focusing on developers. And you've seen those journeys over the last 2 and 1/2 years.

But that's resulted in us being able to then double down and fund more things for developers. And so you've seen us fund Docker Desktop on Apple silicon on the Mac. You've seen us pull forward performance improvements for file sharing on the Mac with VirtioFS. The announcements that we had at DockerCon this week are about that as well.

So this is a commercial business that works with a product that's focused on developers, but the monetization isn't the developer him or herself. The developer gets the same great cloud native tools — which include Kubernetes, Compose, and Docker Engine — regardless of whether they're on a free plan or a paid plan. It's the features that security folks want that are part of those paid subscription plans.

So it's not monetizing the developer. It's organizations — large organizations — that need things like SSO. They need things like secure software supply chain management, and that's who the subscriptions serve.

CRAIG BOX: That does seem very much like the standard approach to a SaaS company today, in that you build something and give it away for free. And then you charge more for the enterprise features on top. The enterprise features, in this case, being obviously the developer focused Desktop features, rather than an enterprise hosting platform.

It feels obvious in retrospect — the idea that if you want to make money from something, ask people to pay for it. Was that ever considered at the time when you were building out the enterprise tooling, that you could just be a tools vendor in that space? Or was the community of VCs and the explosion such that the expectation on you as a company was perhaps different?

SCOTT JOHNSTON: I'd share an answer like this: there was a thesis that there were more dollars on the operations side of the business than there were on the developer side of the business. So let the developers use the free stuff and create all these applications, and let's go monetize all that energy by focusing on the operations side.

As you know, that can be an expensive sell. That is, in some sense, one of the lessons learned. And I also want to be clear, Craig: my fingerprints are on many of the decisions that were made in Docker 1.0, Docker's first generation. And so this is by no means finger-pointing at any other executive or any other person around the table, because I was there in the seat and had my fair share of decisions as well.

CRAIG BOX: Well, I feel that they're all completely defensible decisions based on the market at the time, and no one knowing where things were going to go. There was Mesos at play, a very successful project. That said, you could have just run ads, and then the VCs would let you lose money forever.


CRAIG BOX: It must be quite gratifying for you then that having taken a $105 million Series C round recently, the company is now valued at $2.1 billion, with just the developer tooling division in effect, which outstrips the peak valuation of Docker 1.0, which was $1.3 billion in 2018.

SCOTT JOHNSTON: As you know, VCs invest expecting a return. So the bar just got higher with the investment. I'm very grateful and feel blessed, honestly, that the team that has been with Docker since the restructuring, that we've reached this milestone. Because it acknowledges all of their hard work.

They took risks back in 2019 when there was a lot of uncertainty, a lot of questions about the company's future. And yet, they believed. They believed — we believe — that there's a market here for developers. We believe that the brand and the love of the brand was there, and that we could build a sustainable business around that.

In 2019, there were more non-believers than believers, Craig. The fact that 2 and 1/2 years later we have this milestone, I think, is recognition of the team's hard work, and just fantastic focus on the developer — focus on developer needs, focus on finding an authentic monetization model, a fair monetization model that's not monetizing the developer, but monetizing the needs of managers of developers and of security folks who manage developers. That's what has enabled us to reach this milestone.

And then honestly, we're now looking forward. It's like, OK, we're serving 16-some million monthly active devs, depending on how you count. The market for developers, depending on who you talk to, is 26 million this year. It's going to 45 million this decade. And that's where our eyes are on, Craig.

We're going to focus laser-like on: what are the needs of these next 20-some million developers coming into the market? And what can we do together with the community? Which, as we've talked about, is very active, full of innovation, great complementary tools. What can we all do together to serve these next 20 million developers, up to 45? And that, to me, is what this milestone enables us to do. It gives us extra, whatever we want to call it, cash in the kitty, powder in the keg, to go forward, go faster, be more aggressive, take more risks to serve these developers and grow that market.

CRAIG BOX: When you were the combined company, there was a split out of some of the open-source tooling from the capital D, Docker brand, to a new brand called Moby. How is Moby doing as a project?

SCOTT JOHNSTON: Moby is a whale, so he's not alive and kicking. So Moby is alive and swimming. Or what do whales do with their tails? Flapping his tail, yeah. Moby is alive and flapping his tail.

So many great projects are there and have come from there. containerd is a great example of a project that came out of Moby. BuildKit is a fantastic technology up in Moby that is getting a lot of external contributions and has already been very beneficial to us and many other ecosystem partners, and so on. So I'm very pleased with the community engagement around Moby and how that's been growing.

CRAIG BOX: The musician Moby is still touring. Did you ever consider getting him to be the headliner at DockerCon?

SCOTT JOHNSTON: Shame on me, never did. But after this podcast, I'm going to give him a call.

CRAIG BOX: He'll be on the bill for next year.

SCOTT JOHNSTON: That's right.

CRAIG BOX: Ice Cube and Moby, what a combination that would be.


CRAIG BOX: There was a lot of noise about how last week's Kubernetes 1.24 release removed dockershim. That's been going on for a long time, with people thinking that the sky is falling and not quite understanding the distinction between Docker the binary, which builds applications and can run them if you want, and containerd and the projects that came out of Docker that were designed solely to run containers.

SCOTT JOHNSTON: That's right.

CRAIG BOX: What does that mean for you, or commercial customers, or perhaps people who are running Kubernetes as part of Docker Desktop?

SCOTT JOHNSTON: The short answer, Craig, business as usual. No change. That's really the straight-up answer, meaning, as you hinted just there, the same Docker OCI container images that were running yesterday on Kubernetes 1.23 that was orchestrating and placing them on Docker engines to run, those same images run just fine on Kubernetes 1.24 that is now routing and placing those container images to run on containerd.

There's absolutely zero change from a developer standpoint. And in fact, just to get into the stack a little bit, containerd is actually inside Docker Engine. So containerd was already running container images in Kubernetes 1.23 before the removal of dockershim in Kubernetes 1.24. And so it eliminates one hop in the routing and placement of containers that, to many, is important. But from a developer standpoint, and from an application architect standpoint, and really, 99% of the users, developers, and operators in the ecosystem, it means exactly zero change.
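Concretely, the change Scott describes is only in which runtime the kubelet talks to; here is a rough sketch of the two stacks (the socket path shown is the common containerd default and can vary by distribution):

```
# Kubernetes 1.23 and earlier (dockershim built into the kubelet):
#   kubelet -> dockershim -> Docker Engine -> containerd -> runc
#
# Kubernetes 1.24 and later (kubelet speaks CRI directly):
#   kubelet -> containerd -> runc
#
# Typical kubelet setting after the change:
--container-runtime-endpoint=unix:///run/containerd/containerd.sock
```

Either way, the same OCI image produced by a docker build runs unchanged; only the extra hop through Docker Engine goes away.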

CRAIG BOX: This week, of course, we have celebrated DockerCon, as we've mentioned many times. DockerCon was, at one point, a big physical event in a physical space. We look forward to those returning back at some point.

SCOTT JOHNSTON: Some day, yes.

CRAIG BOX: As the company has changed from a focus on enterprises to focus on managers of developers who perhaps run enterprises, what does that allow you to do with the format of the event, aside from the fact that we've had this enforced virtual world?

SCOTT JOHNSTON: In some sense, the enforced virtual world that COVID brought was a real blessing, because it forced us to rethink DockerCon. It forced us to radically say, OK, if this is 100% virtual, what is that? And what it's done, probably no surprise, is that our reach is now just global.

We have over 200 countries participating in DockerCon, which has resulted in an explosion in attendance. We had 80,000 participants in 2020, another 80,000 in 2021. We're expecting the same order of magnitude this year. And they come from all over the globe. They don't mind waking up at whatever hours or whatever time zone because they're so excited to participate in this community, going back to earlier threads about just the love of the brand, and the love of the product, and the love of trading tips, and the whale, and tips, and tricks, and best practices. There's just so much positive energy there that they'll wake up whatever time to join.

And then that led us down to, oh, wait a minute. How about community rooms with different languages as their core language? This week, we had eight different community rooms with eight different native languages, in addition to the English main stage. The other thing, from a format standpoint, which is super interesting for those of us that used to go to live performances or live shows only, is that we mix it up with a combination of live stream as well as recorded.

And by doing the recorded and then playing that back with the speaker in the live chat alongside, it allows the participants to make comments about the talk. But then they're going back and forth with the speaker in real time in the chat. And if we think about the bad, old days of a live show, you remember those long lines that people would line up after the talk? You'd go rush the speaker and you'd be 20th in line trying to talk to the speaker about a thing.

Now, guess what? You've got 1,000 people that are going back and forth with the speaker in real time as the talk is unfolding. It doesn't replace the face-to-face, and I don't want to underweight the importance of that face-to-face human connection, from a glue standpoint, from a getting-to-know-you standpoint, from a social trust standpoint. We're all humans and people at the end of the day.

But the reach, the openness that it allows: students in time zones half a world away can interact with speakers that before they wouldn't have had a chance to. Because they would not buy a plane ticket. They would not fly around the world. They would not come and sit in a seat. This might be a highbrow label on it, Craig, but it's really democratized this type of conference and engagement at a global scale. It's pretty exciting for us to see.

CRAIG BOX: From highbrow, perhaps, now to the pun with which we were announced, "we are krilled to invite you." Can you put me in touch with your pun people?

SCOTT JOHNSTON: The puns just write themselves, Craig. I got to say, it's very krilling to be on this podcast.

CRAIG BOX: I'm going to now tell you one of my favorite New Zealand jokes of all time, as told by the famous Billy T. James. Where do you go to weigh a whale?

SCOTT JOHNSTON: We're having a whale of a time on this podcast, too. I don't know where you go to weigh a whale, though.

CRAIG BOX: The whale weigh station.

SCOTT JOHNSTON: We actually have whaleness days at Docker, Craig. In the times of COVID, when working from home is endless Zoom meetings, it's tough. We've given all employees extra days during the year and we call them whaleness days. Not wellness days, whaleness days.

CRAIG BOX: I hope that doesn't mean people sit and eat a lot of food. It could, whatever makes people happy.

SCOTT JOHNSTON: It could. We do ship snack boxes. Maybe we should ship krill in the snack boxes. What do you think, seaweed krill in the snack boxes? They wouldn't get eaten then probably.

CRAIG BOX: Just no "chups." There's a link in the show notes for everyone who won't understand what I just said there.

Let's talk about the announcements from DockerCon this week. First of all, Docker Extensions. Previewed recently, they offer the ability for vendors to integrate with the Docker Desktop platform. What are some of the things that you've seen come out of that that perhaps you didn't expect?

SCOTT JOHNSTON: The thing that Docker Extensions was solving for developers: we've all seen that CNCF landscape map. Really inspiring, all the 1,000 logos, and boxes, and categories. Also, potentially terrifying. Which tool should I use?

CRAIG BOX: It's a bit more like a magic eye diagram than a roadmap.

SCOTT JOHNSTON: It's inspiring and eye-opening at the same time. A developer looking at that, it's like, OK, wait a minute. Which tools can I use? And where should I find them, and what configuration?

Docker Extensions is designed to meet those developers and say, look, here are the tools. You can one-click install. We're going to do sane defaults. We're going to make sure they're secured, locked down, tested, and that they integrate with your local inner loop cloud native development toolset, which includes Docker Desktop, includes the Kubernetes stack that's local there on Docker Desktop, and just makes it easy to start exploring and consuming tools and get back to work developing your app.

And it helps you really get down to focus on your apps, versus searching for tools, trying to find dependencies, trying to learn in forums about how to install effectively. It just lets you do one-click, install the tool, and get back to work. So that's, broadly speaking, what extensions do for devs.

The categories that we found that were super interesting and emerging, one was around dev collaboration. So there's a number of partners — Tailscale, Okteto, Ambassador Labs — who provide tools that help development teams collaborate with each other, kind of laptop-to-laptop, as well as laptop to shared dev cluster. And right now, to set up those tools, you have to really know your networking business. You have to be able to get into the weeds, and understand how to configure and set it up.

Now, one click, your containers on your laptop — whether it's a Mac, or Windows, Linux, doesn't matter — are talking to the containers on your friends', your colleagues' laptop, regardless of whether that's a Mac, Windows, or Linux, Docker Desktop. Or it's talking to your shared dev cluster up in the cloud. Just the ease with which you can start doing these hybrid modes of collaboration in your dev team is out-of-the-box exciting and super, super interesting to see.

The second big category, Craig, is simplifying development of apps that are destined to be deployed on Kubernetes. We've got five partners that have brought a really awesome extension experience. One click, you get all of your tools set up. We demoed VMware Tanzu tooling onstage. Red Hat has a toolset as well. Ambassador Labs has a toolset, Mirantis has a toolset. In one click, all of your Kubernetes dev tools are installed, ready to go, configured, locked down, so that you can just focus on developing your app, versus, again, trying to fuss with, and configure, and automate your tooling.

The third category is around securing your containers that you're building. We call it secure software supply chain. And there we had four extension partners build great extensions that, one click, now you've got scanning automated, built into your dev environment. So your inner loop is automatically scanning your containers as you build them. So those are the three big use cases that, in these first 15 partners or so, highlighted to us the potential for Extensions.

CRAIG BOX: You mentioned there you can run against Docker Desktop on Mac or Windows, obviously, but also Linux. And that was the second big announcement from DockerCon. Tell me a little bit about the market for developing with a desktop environment on Linux, versus obviously the command line tooling, which has been Linux native from day one.

SCOTT JOHNSTON: Super good question. It's one that is not surprising, which is, why bother? Because Docker Engine, open source, Kubernetes, open source. All that is available on Linux today.

And what was interesting is we saw the need for a one-click install and automated update process for all those tools. And so today, as you're probably aware, on Mac, and Windows, and now on Linux, Docker Desktop downloads the Docker Engine, downloads Kubernetes, downloads Docker Compose, BuildKit, all these cloud native tools, and configures them. But it's not just one and done; it also provides automated updates of the patches, of the security releases, of the version bumps.

That ease of install and update, it turns out, is something Linux workstation developers want as well. By our count, about 20% to 25% of developers are using Linux workstations. And here's another signal, Craig, that really encouraged us to move forward: Docker Desktop for Linux has been the number one requested feature on our public roadmap, which is up on GitHub, for the last 12 months. It's been consistent up there, and more and more people have piled on as they've seen the features come into Docker Desktop for Mac and Docker Desktop for Windows.

And so between those factors, plus so many organizations that are adopting Docker Desktop across their thousands of developers, they say, look, I want to be able to manage not just my Windows and Mac developers, but I've got 20% of my developers on a Linux workstation. I want to be able to manage that whole fleet very, very consistently. And I want a consistent development environment so my Linux devs aren't pointing at the Mac and Windows guys, saying, hey, it works on my machine, or vice versa.

So now you have the same managed Linux VM across all three platforms. You got the same environment variables, the same configurations. Guess what? All those apps just now work seamlessly across all three platforms.

CRAIG BOX: Perhaps it will be the year of Linux on the desktop after all.

SCOTT JOHNSTON: Ooh, well played. Yes. From your lips to God's ears.

CRAIG BOX: You talked before about how the commoditization of running binaries on Linux allows you to take your OCI image and run it on whichever kernel you want. When it comes to running desktop integrations there are different desktop environments, and there are differences between distributions. Were you able to distribute Docker Desktop as containers and run it on multiple different distributions? Or have you had to constrain yourself to a smaller number to support?

SCOTT JOHNSTON: It actually turned out to be the latter for this launch, Craig. And so we're on Debian, Ubuntu. We're on RPM, Fedora, and we're on Arch. And I forget the packaging manager for Arch. So we're launching on those three. Oh, I'm sorry, Raspberry Pi as well.

So Raspberry Pi, we did a special packaging distribution for Raspberry Pi to help those students just get going out of the box for free. And so we're supporting those four out of the gate. We're going to listen to the market and add more versions as the requests surface.

CRAIG BOX: That raises an interesting discussion about the fact that people are now developing locally on Arm in a way that they weren't previously. How are you seeing that adoption? You've obviously got Docker Desktop running on Apple silicon now.

You've got some cloud providers running services. Do you think there will be a split world where people are developing locally on one architecture and running remotely on another? Or do you see consolidation, and people will need to have the same local environment that they're going to deploy into?

SCOTT JOHNSTON: Before I answer, we also see another signal, Craig, which is interesting, which is that Docker Hub supports multi-arch images. And so the rise of images that also have an Arm build as part of the packaging is also exploding, just going through the roof. So I think the answer is, as you probably know, BuildKit allows for multi-arch builds locally.

And so we think that that will be the new normal for the next, call it, two to four years, where, if you're an x86 dev, you're going to start producing x86 for your local inner loop. But you'll also start producing an Arm build once you get to the final snap for deployment on a Graviton or your cloud provider of choice. Because it seems, just from a tooling standpoint, that not all tools are multi-arch, multi-platform yet. And so because of that, I don't think we're yet at a tipping point where everything rails to Arm, or everything kind of stays on x86, is my current reading of the tea leaves.
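The multi-arch inner loop Scott describes can be sketched with a Dockerfile that uses the build arguments buildx supplies for each platform; the Go app and image name here are hypothetical:

```dockerfile
# Hypothetical multi-arch build, invoked with something like:
#   docker buildx build --platform linux/amd64,linux/arm64 -t example.com/myapp --push .
# buildx supplies BUILDPLATFORM, TARGETOS, and TARGETARCH automatically.
FROM --platform=$BUILDPLATFORM golang:1.18 AS build
ARG TARGETOS TARGETARCH
WORKDIR /src
COPY . .
# Cross-compile on the host's native platform for each requested target:
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/app .

FROM gcr.io/distroless/static
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

One invocation then pushes a single multi-arch tag, so an x86 laptop and an Arm-based cloud instance can pull the same image name, which is the flow described above.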

CRAIG BOX: You've crossed off the number one request on the roadmap with Linux on the desktop. Where do you want to see Docker going next? What are the next items that you feel like you need to address? Or where do you think the tooling could go over the next 12 months?

SCOTT JOHNSTON: I'll say it's both user requests, but also where we see the market going. The one thing that's so exciting about our industry is that devs will take you to all these interesting places you had no idea that were there and that were exciting. As evidenced this week at DockerCon, devs are already using Docker tooling and Docker formats for emerging technologies in addition to containers. And by that I mean—

CRAIG BOX: WebAssembly.

SCOTT JOHNSTON: Yeah, WebAssembly, as an example. Functions, right? So functions as a service, or serverless, you're seeing the inner loop done in Docker OCI containers before it's then pushed to the cloud providers function as a service. Web3, not to jump on the hype train, but you're seeing Web3 applications being produced with Docker tooling.

And so that's where we want to take it, Craig, and emphasize to the developer community: you've developed skills, you've developed toolchains, you've developed pipelines and infrastructure to handle containerized applications. And we're going to help you adopt these next new waves of technology if your organization is looking to those to help you in your next application stack. That's out there 12-plus months, as you might expect.

CRAIG BOX: Finally, Scott, if you weren't in tech, what would you be doing?

SCOTT JOHNSTON: I'd be a ski instructor, Craig, in Switzerland. I love the mountains, love the wide-open spaces, love the balance, the flow state when you're right on the edge, literally and figuratively. That'd be a decent alternative to tech. Love tech, but that wouldn't be a bad second.

CRAIG BOX: There's a big conference in Spain next week if you're looking for an excuse to go to Europe.

SCOTT JOHNSTON: I'm going, Craig. Are you going?

CRAIG BOX: Of course. We'll see you there.

SCOTT JOHNSTON: I'll see you there. Look forward to it.

CRAIG BOX: All right. Well, thank you very much for joining us today, Scott.

SCOTT JOHNSTON: Craig, it was a pleasure. Enjoyed the conversation. Be safe, be well.

CRAIG BOX: You can find Scott on Twitter, @scottcjohnston, and you can find Docker at Docker.com.


CRAIG BOX: Thank you very much for listening. I hope you enjoyed this roundup of DockerCon, and I hope you're looking forward to KubeCon next week. Please help us spread the word and tell a friend to listen to the show.

If you have any feedback, you can find us on Twitter @KubernetesPod, or reach us by email at KubernetesPodcast@google.com. If you're in Spain, you can also give feedback over paella. You can check out the website at KubernetesPodcast.com, where you will find transcripts and show notes, as well as links to subscribe. See you next week.