#141 March 10, 2021
Crossplane lets you automate creation of infrastructure using Kubernetes APIs. Daniel Mangum is a Crossplane maintainer working at its creator Upbound, a TL of Kubernetes SIG Release, and a YouTube streaming star. He chats about tech with host Craig Box, who is helped this week by returning guest Ken Massada from GKE’s Support team.
Do you have something cool to share? Some questions? Let us know:
CRAIG BOX: Hi, and welcome to the Kubernetes Podcast from Google. I'm Craig Box with my very special guest host Ken Massada.
What I like about having guest co-hosts is getting to catch up with people I haven't had a chance to talk to for a long time. Ken was our guest in episode 18, all the way back in August 2018. And it turns out it's been so long since I've spoken to him he's gone and started his own podcast.
KEN MASSADA: Hi, hi, hi. Hello. How are you? It's such a pleasure to be here.
CRAIG BOX: Thank you. Pleasure to have you. Welcome back.
KEN MASSADA: Thank you, thank you, thank you so much. It's been a long time, like more than 100 episodes since the last time.
CRAIG BOX: Tell me about "Things We Don't Say."
KEN MASSADA: All right. So the "Things We Don't Say" podcast is a social and cultural podcast. We all needed something to do through the pandemic to keep us sane. That's sort of what I ended up doing. It has nothing to do with tech. And I did that on purpose. And it's just conversations between me and people that I'm very familiar with, where we get very vulnerable and open about all sorts of topics.
CRAIG BOX: Yeah, I have to say-- I've listened to a few of them-- there's a whole bunch of conversations that I wouldn't normally hear in my day-to-day life, but that have just been so enlightening to drop in on: conversations about friendship, like you say, vulnerability, art, masculinity, about being Black in the US. I encourage everyone to listen, especially to the last couple of episodes, and tell me that you don't wish you had a friend like Ken's friend JP.
KEN MASSADA: Oh, snap. Real shout-out to JP. He would absolutely love this when he hears this.
CRAIG BOX: You sound like you've been having these interesting conversations with friends for some time. What inspired you to start recording them?
KEN MASSADA: Honestly, it was just a conversation. You know, I've always had an artsy side to me. And my sister, one time, asked me what I did for fun. At the time, it was a tech blog. And so, after that, I started to feel like maybe I needed a hobby that didn't have to do with tech. I didn't need to eat and breathe tech every day. And so that's sort of why I picked that.
And I like music. On the podcast, I source original artists, kind of go around and ask them to let me feature their music and stuff. So anything I can do to bring more of the arts into my life. That was the main reason why I did that project.
CRAIG BOX: Well, I will encourage everyone to pause their podcast player right now, go search TWDS. One thing you'll learn listening to the show is that Ken has quite the love for New Zealand.
KEN MASSADA: I do. [CHUCKLES] When I can tell it apart from Australia. [CHUCKLES]
CRAIG BOX: You live close enough to Canada that it's easy to get the two confused.
KEN MASSADA: Yeah, they look like really small pieces of land in the middle of the water that are so far away. [CHUCKLES]
CRAIG BOX: Yeah. Drop yourself in the middle of Australia and try calling it a small piece of land.
KEN MASSADA: [CHUCKLES] True. Granted, granted.
CRAIG BOX: I don't know if you caught the news from New Zealand over the last week, but there were two news stories that happened. The first one was that they discovered "glow in the dark" sharks off the coast of New Zealand.
KEN MASSADA: Oh, that is so amazing. I saw the picture of the shark. And I want a print of that. That's really cool.
CRAIG BOX: And then, of course, the second news story was that there were three 8.2 magnitude earthquakes and tsunami warnings. And here was I worried, oh my god, they're going to pick up all the "glow in the dark" sharks and throw them on the land. Everyone has to run inland to avoid shark attack.
KEN MASSADA: You know what, that's also another fascinating thing about New Zealand. I saw video clips of that. And people stopped their cars and got closer to the water. I'm like, what are you guys doing? [CHUCKLES]
CRAIG BOX: Just checking to see if anything interesting happens. There's not a lot of danger there. There's not a lot that'll kill you; there are no spiders. The sharks, you probably have to, like, punch them in the face or something for them to bother attacking you.
KEN MASSADA: Nothing. Just let's get closer to the water as the ground is shaking because, well, we're from New Zealand. You see? That's why I love that place.
CRAIG BOX: They've got no COVID. They've got to invent other forms of danger.
KEN MASSADA: True, true. [CHUCKLES] And an amazing prime minister if I must say.
CRAIG BOX: Indeed. Let's get to the news.
CRAIG BOX: Microsoft announced a number of AKS launches at their annual Ignite event last week. Azure Arc for Kubernetes, which lets you attach any Kubernetes cluster to their control plane, is now GA, as are enclave-aware secure containers, managed Active Directory access, and the Application Gateway ingress controller. Azure Migrate launched an app containerization service in preview, which lets you migrate ASP.NET apps to Windows containers and Tomcat apps to Linux containers.
Microsoft Mesh also made headlines, but unfortunately it's about VR and not sidecar proxies.
KEN MASSADA: Some updates from open-source projects created by Microsoft's Deis Labs team. Helm has passed its second security audit, focusing on its threat model. Brigade, an event-driven scripting engine for Kubernetes, is starting its march to version 2, where Kubernetes fades into the background as an implementation detail and it introduces its own APIs.
CRAIG BOX: The Harbor container registry has released version 2.2, adding Prometheus telemetry and robot accounts that can run automated actions as long as that action is not ticking a checkbox. The project published a look back at 2020 and a roadmap for 2021 including a lighter edge version and IPv6 support.
KEN MASSADA: Google Summer of Code has launched for 2021 with 202 mentoring organizations, including the CNCF, Ceph, Cilium, and that's just the ones starting with C. If you're interested in signing up, applications will be open on March 29 and run until April 13.
CRAIG BOX: The CNCF has published the schedule for KubeCon Europe 2021. The best news for people like myself in Europe-- or, after Brexit, Europe-adjacent-- is that the event is actually held during European working hours this time around. The programming committee evaluated over 600 sessions and accepted 95, for a 15% acceptance rate. The schedule's selection process came under some debate on Twitter. And in response, the CNCF posted explaining exactly how it all comes together.
KEN MASSADA: Finally the Kubernetes project continues to demonstrate incredible momentum, yesterday hitting 100,000 PRs and issues on GitHub. Thank you to everyone involved, except perhaps the guy who submitted that issue number 100,000.
CRAIG BOX: And that's the news.
CRAIG BOX: Daniel Mangum is a senior software engineer at Upbound, where he is a maintainer of Crossplane. He is also a technical lead for Kubernetes SIG Release. Welcome to the show, Daniel.
DANIEL MANGUM: Thanks for having me. I'm super excited to be here.
CRAIG BOX: Thank you. You went to school at a time that Kubernetes existed. A lot of the people we talk to on the show obviously had to go about creating it, for example. So they pre-date this a little bit. Slightly grayer hair than you perhaps. So first of all, what's a CS degree like these days?
DANIEL MANGUM: I think it really depends. Mine was very formal, so a lot of theory. So typically you'll go through the algorithms and data structures part of the degree. And then I saw, generally, at least at the university I was at, that most folks went towards the sort of web development path, and fewer folks went to distributed systems or potentially lower-level stuff.
So there's two kind of main tracks that you could go on. But everyone had that foundation of the fundamentals of theoretical computer science.
CRAIG BOX: Was that, you think, based on people's preference for careers afterwards? Or was that what you were being taught, those two tracks that people took?
DANIEL MANGUM: I think it was some of both. So a big trend that you probably see in lots of universities, at least in the US is where I can speak to, is lots of folks taking on computer science as an additional degree or minor or something like that, with a business degree or something. And that allows them to sort of put on their resume, I am competent with computers or something like that.
CRAIG BOX: I know both ends of the blockchain.
DANIEL MANGUM: Exactly. So you see a lot of folks that were doing that, folks that were more interested in potentially working on distributed systems or lower-level stuff like firmware might go towards computer engineering or some of the lower-level computer science classes.
CRAIG BOX: Now, students quite often do internships throughout their degree. And you did one writing Visual Basic for applications. Surely you could have had a very different career if it followed that path.
DANIEL MANGUM: I think so. For my age, which you alluded to earlier, I'm a bit of a nostalgic engineer, I'd say, in that I have been given the benefit of coming into the industry with lots of really wonderful abstractions that allow us to do things really fast and efficiently, Kubernetes being a great example of that.
But I listen to a lot of folks who have been in the industry for longer and hear them opine about older days where they wrote Visual Basic or they were programming their 6502 or their Commodore 64 or something like that. And I, personally, being a little bit of an oddball in that regard, long for that sort of experience that I just skipped right over.
So the Visual Basic experience, that was not my favorite position that I've ever held, but it was definitely illustrative of some of the more rudimentary ways of programming that folks that are more mature in the industry went through before we've reached today.
CRAIG BOX: A lot of people are obviously into the retro scene, especially at the moment. And there's even an article in "The New York Times" about how big it's been over the lockdown. Is that something that, not having grown up, necessarily, with that technology, that you find interesting?
DANIEL MANGUM: Things like that are definitely super interesting to me. And every time you go back to something like 8-bit computers, you get more context when you're designing at the higher level about what's actually happening behind the scenes. So those are definitely things that are attractive to me.
Once again, talking of abstractions, I've taken a passion for FPGAs, because you can design your own hardware and flash different hardware onto an FPGA. So you can experience a lot of these different softcore CPUs that may be very old or not actually used in industry today, and have that same experience of programming with them.
CRAIG BOX: I'll recommend a conference talk to you from 2016. There's a guy called Jason Turner who put a talk out called Rich Code for Tiny Computers. He's nominally talking about C++. But he's on stage effectively writing a game for the Commodore 64.
DANIEL MANGUM: I'll have to check that one out.
CRAIG BOX: How did cloud native come into your world, then?
DANIEL MANGUM: A variety of different ways. Some of them were subsequent internships where I saw organizations starting to shift, which we hear very frequently, moving from a monolithic, maybe on-prem application to cloud native microservices. All the buzzwords you want to throw in there.
So I was exposed to that through some of my internships. And then, also, universities, though slowly, are moving computer science curriculum to take advantage of different cloud providers. So while you may have had a computer lab at a university a few years back, now they'll give you an AWS account. And you'll get your EC2 instances and do your work on virtual machines in the cloud.
So that's kind of how I was introduced to that. And I also started to see this bifurcation of getting access to low-level infrastructure and then super high-level infrastructure. So there's kind of the layer 1 cloud providers, if you will, which would be AWS, GCP, et cetera. And then there's all these services built on top of that, whether it's Heroku, Netlify, Render is another one. And they give you these high-level primitives. I mean, I was really interested in taking those lower-level constructs and composing them into higher-level primitives and the tooling to do that.
So a big tool that was popular in open source when I was in school was Terraform. So I started contributing to that and got a little bit of experience with it, and was kind of fascinated with the enhanced workflows and different developer team structures that it facilitated.
CRAIG BOX: We spoke with Jared Watts from Upbound in January of 2019. Here's a secret: not everything goes out the day that we claim it did, and we actually recorded that chat at the end of 2018. That was just before Upbound had launched Crossplane. So first of all, how is Jared doing?
DANIEL MANGUM: Jared is doing great. He has taken on a lot of administrative roles and is currently stewarding Crossplane through the same CNCF ladder that Rook went through. But yeah, he was excited to hear that I was coming on here. And I'd better live up to his great performance in that episode.
CRAIG BOX: Why do you think he didn't tell us about it when we were on the show? Did he think we couldn't keep it a secret?
DANIEL MANGUM: You'd have to ask him directly. I've listened to the show for quite a long time. And you seem very trustworthy. So I'll follow up with him and make sure there's no ill will there.
CRAIG BOX: That's very kind. Upbound was founded by people with a background in storage. And indeed Rook was the first product out of Upbound as a storage product. Was the intention always to build something like Crossplane? Or did one sort of come out of the experience from building the other?
DANIEL MANGUM: There was actually another startup that was built around Rook initially, same founders as Upbound. And that was super focused on the storage area. And Rook came out of that. And Rook has received great adoption. Lots of folks are using it in mission-critical production scenarios.
The next gap that the folks who started Upbound saw in the market was Crossplane. Crossplane initially was billed as this-- "multicloud control plane," I believe, was the tagline for it when it was first announced. It's kind of evolved over time, as the concept of multicloud has as well.
But for the folks at Upbound, Crossplane was the main purpose for starting the company. And the commercial products built on top of that are all based on Crossplane itself. It was 100% the motivation for starting a company and getting the open-source project going.
CRAIG BOX: Now, I was going to ask you about that, realizing of course that you weren't there at the time of the foundation of the company. But I remember reading the blog post announcing Crossplane and hearing it described as a multicloud control plane, and not really understanding it.
I think that the messaging has obviously changed over time. Has the product changed? Or is it just how it's being described?
DANIEL MANGUM: I think the answer to that is yes and no. So the product has definitely changed. And when I say that, I'm really just talking about the open-source project, Crossplane. It's changed over time. However, the ability to do multicloud with Crossplane has not been diminished as it's changed over time. At the same time, lots of organizations that have come to use Crossplane are not super interested in multicloud.
I think there was a period in time, especially after the launch of Kubernetes, where folks saw Kubernetes as this thing that was going to make everyone use every cloud, and they were all going to be interchangeable. Crossplane was built on that same mantra. But it's moved from multicloud being at the forefront of the value proposition to it being just an implementation detail. So as we talk about Crossplane's structure later on in the show, we'll see how, while it can still facilitate a multicloud strategy within an organization, it's definitely not specifically built for that. It just happens that that's a really good use case.
CRAIG BOX: Let's get into the details then. The Crossplane website introduces it as an open-source Kubernetes add-on that supercharges your Kubernetes clusters, enabling you to provision and manage infrastructure services and applications from KubeCTL. Did I pronounce that last word right?
DANIEL MANGUM: No comment.
CRAIG BOX: That's obviously the marketing copy. Is that how you would describe Crossplane?
DANIEL MANGUM: I'd describe it pretty close. I'd probably shy a little more away from applications. And we can talk about some of the projects that we integrate well with to manage some of those applications. But I know that that can be a bit confusing for folks. But that's pretty spot-on as far as what Crossplane will do for an organization.
CRAIG BOX: Let's then jump into what I'm going to call the inevitable comparison section. Because no matter what it is that you launch, everyone's going to look at it in terms of, well, what's a thing that's like this that I can relate it back to? The first thing that we'll come to is something you mentioned up front that you worked on: Terraform. Terraform is a configuration language that lets you define some infrastructure, plus a program which runs and executes that definition. That's obviously a one-off thing: you run your Terraform job and it creates your infrastructure. Could that be run as a Kubernetes control loop, or is that inherently a one-and-done system?
DANIEL MANGUM: Terraform itself, you're exactly right, has a start and an end when you want to run a terraform plan and terraform apply. And I'll say, for folks that are interested in this comparison in more depth, one of our maintainers actually put out a blog post last week comparing-- I believe it was called Crossplane versus Terraform, that compared some of these points in finer detail.
But just to kind of touch on them at a higher level: you talked about how Terraform has a beginning and an end. Could that be put into a reconciliation loop? Absolutely. So you could take a Kubernetes controller and have it run terraform plan and apply, every minute, for eternity, and have it react to any changes you make in the cluster. But Terraform was not necessarily really built for that.
And there are a couple of reasons for that, around the way Terraform handles state and that sort of thing. Crossplane takes an approach of shifting that to a Kubernetes-native model, where we have controllers running that are talking to these cloud providers. Everything is represented as Kubernetes custom resources, so they're being stored in etcd, just like a Pod or Deployment or something like that. So the first differentiator is absolutely that reconciliation, where you get automatic drift detection and that sort of thing, and it will drive your state back to what you've specified in the cluster.
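The reconciliation model Daniel describes can be sketched in a few lines of Python. This is an illustrative toy, not Crossplane's actual controller code: `observe`, `reconcile`, and the field names are all hypothetical stand-ins for calls a real provider would make against a cloud API.

```python
# Toy sketch of a Kubernetes-style reconciliation loop: desired state is
# declared once, and the controller repeatedly corrects any drift in the
# external system. Names and fields here are illustrative only.

def observe(external):
    """Stand-in for reading current state from a cloud provider API."""
    return dict(external)

def reconcile(desired, external):
    """Detect drift between desired and observed state, then correct it."""
    observed = observe(external)
    drift = {k: v for k, v in desired.items() if observed.get(k) != v}
    for key, value in drift.items():
        external[key] = value  # stand-in for a cloud provider update call
    return drift

# Desired spec, as it would be stored in etcd via a custom resource.
desired = {"tier": "db-n1-standard-1", "diskSizeGb": 20}
# External system that has drifted (someone resized the disk by hand).
external = {"tier": "db-n1-standard-1", "diskSizeGb": 50}

corrected = reconcile(desired, external)
print(corrected)            # {'diskSizeGb': 20}
print(external == desired)  # True
```

Run in a loop, this is the difference from a one-shot `terraform apply`: the manual disk resize is detected and reverted on the next pass without anyone re-running anything.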
CRAIG BOX: Another product in the space which came out similar kind of time is Pulumi. Pulumi describe themselves as infrastructure as code. Sometimes Crossplane is talked about in terms of infrastructure as data. Why can't things just be infrastructure?
DANIEL MANGUM: I think a lot of the terminology is confusing. But in this case, there are very specific differences. A big one to me, because this is something that I've kind of worked on in the Crossplane ecosystem, is the level of abstraction that someone is operating at and the level of permission that you give them. So if you have something like Terraform or Pulumi, you've written scripts or modules and they get invoked by a user at a certain time. And that user has to have the permissions for those actions to take place. So if you're running a terraform apply, you need to have credentials that give Terraform the ability to take all those operations on your behalf on the cloud provider.
So you've got your abstraction and permissioning at different levels there. In Crossplane, everything being represented as a Kubernetes object is managed with Kubernetes RBAC. So you go from a user-based authentication model with these cloud providers to an object-based authentication model. So you, as an infrastructure operator, take these granular resources-- maybe something like a Cloud SQL instance on GCP-- and you create abstractions on top of that. And maybe you present that to the developer as a Postgres instance, which is a little more abstract and maybe has some defaults set and only allows some fields to be configured.
And then you give the operator the ability to take those actions. And the developer, if they're given the ability to create a Postgres instance, you already know that the operator can do that on their behalf. But the permissioning level is at the same level of abstraction that the developer is interacting with.
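A developer-facing claim like the one Daniel describes might look like the following, sketched here as a Python dict rather than YAML. The group, kind, and field names are hypothetical, not an exact Crossplane schema; the point is what the developer sees and is permissioned for.

```python
# Hypothetical developer-facing claim: the developer asks for an abstract
# "PostgreSQLInstance" and never sees (or needs cloud credentials for)
# the underlying Cloud SQL or RDS resource. All names are illustrative.

claim = {
    "apiVersion": "database.example.org/v1alpha1",
    "kind": "PostgreSQLInstance",
    "metadata": {"name": "team-one-db", "namespace": "team-one"},
    "spec": {
        # Only the fields the platform team chose to expose:
        "storageGB": 20,
        "version": "13",
    },
}

# The platform team's controllers hold the cloud credentials; the
# developer only needs Kubernetes RBAC on this one object type.
developer_rbac = {
    "verbs": ["create", "get", "list"],
    "resources": ["postgresqlinstances"],
}

print(claim["kind"], claim["spec"]["storageGB"])
```

Note that nothing cloud-specific leaks into the spec: the permission boundary sits at the same level of abstraction the developer works at, which is the object-based model Daniel contrasts with user-based cloud credentials.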
CRAIG BOX: Some people with experience in the Cloud Foundry ecosystem might be listening to this and saying, hmm, that sounds a bit like the service broker. And the service broker idea is something that Google and the Kubernetes community kind of had a play with, and then decided people were much happier working with Kubernetes objects like you mentioned, rather than dealing with at least the Open Service Broker implementation. How would you describe the differences between those two implementations? And what has Crossplane learned from service brokers as a concept?
DANIEL MANGUM: I am not super familiar with the Open Service Broker and its final implementation and how it worked in practice. But I think you touched on the main point here, which is going for a fully Kubernetes-native approach. Both the granular managed resources, as we call them-- the custom Kubernetes resources that represent, one to one, the cloud provider APIs, so as I mentioned before, a Cloud SQL instance or a GKE cluster or something like that-- as well as the mechanisms to compose those into higher-level abstractions, are Kubernetes resources. So that enables you to standardize on the Kubernetes API, which also enables integration with a lot of other projects.
So one of the primary ways that we've seen this manifest and be beneficial to consumers is the ability to use GitOps to go through all of this. So you're creating your workloads, your pods, your deployments with your GitOps pipelines. You can also create your database alongside that. You can also create the abstractions for the platform that you're actually serving up to folks to then put into their GitOps pipeline.
So really, standardizing on the Kubernetes API is beneficial from that perspective. And by integrating with products like OPA or any sort of policy engine, you automatically get the ability to say things like, you're not allowed to create a Cloud SQL instance larger than 20 gigs or something like that. And that just works out of the box, because we're all using the Kubernetes API.
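The 20-gig policy Daniel mentions would normally be written in Rego for OPA; the same check can be sketched in plain Python to show what an admission-time policy evaluates. The manifest shape and field names are illustrative, not an exact Crossplane or OPA schema.

```python
# Sketch of the kind of admission policy OPA could enforce against
# Kubernetes objects: reject any Cloud SQL-style resource whose disk
# exceeds a limit. Manifest shape and field names are illustrative.

MAX_DISK_GB = 20

def admit(resource):
    """Return (allowed, reason) for a single resource manifest."""
    size = resource.get("spec", {}).get("diskSizeGb", 0)
    if size > MAX_DISK_GB:
        return False, f"diskSizeGb {size} exceeds limit of {MAX_DISK_GB}"
    return True, "ok"

ok, reason = admit({"kind": "CloudSQLInstance",
                    "spec": {"diskSizeGb": 50}})
print(ok, reason)  # False diskSizeGb 50 exceeds limit of 20
```

Because every piece of infrastructure is just another Kubernetes object, one policy hook like this covers databases the same way it covers Deployments.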
CRAIG BOX: We used to have multicloud tools like RightScale or Scalr, which would let you say, create an instance. And then if you were backed by Google Cloud, it would create a Google instance; if you were backed by some other cloud, it would create one there. That model locked you into a lowest-common-denominator approach, where you would say, create an instance, and it can be this size and shape. But effectively, you could only describe the things that you wanted to do on all of the vendors that you wanted to support.
You have been talking about the idea of having a type, which is, for example, a Postgres instance. And that might be backed by Cloud SQL on Google or RDS on Amazon, for example. How do you make sure that you don't fall into that lowest-common-denominator problem?
DANIEL MANGUM: Well, a huge benefit is that the end user-- in this case, infrastructure teams within an organization-- gets to define both that abstraction layer and the translation layer. So you could have a field in the spec of your Postgres instance that mapped to the instance type on GCP Cloud SQL and the instance type on RDS on AWS. And those are obviously different terms that are used for the different instance types of compute that you can use there. And we have facilities in that translation layer, which we call composition, that allow for things like mapping to different values.
So you may have, in the spec of your Postgres instance, a size field, which is an enum: small, medium, and large. On GCP, that's going to map to certain instance families, and on AWS, it's going to map to others. But in that composition layer, as you're defining those abstractions and the granular managed resources that satisfy them, you can actually perform that translation, which avoids lock-in. And you can also do things like have multiple compositions for the same cloud provider.
So it's nice to talk about this multi-cloud scenario. Because it is the ideal that we see for an organization where they can just transparently switch out the cloud provider from underneath. You could just be switching out services on the same cloud provider or a different configuration of those services on the same provider.
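The small/medium/large translation Daniel describes is essentially a per-provider lookup table. Here is a minimal sketch; the specific instance-type names are illustrative guesses at plausible values, not a real composition's mappings.

```python
# Sketch of the composition "translation layer": one abstract size enum
# mapped to different concrete instance types per provider. The
# instance-type strings below are illustrative, not prescriptive.

SIZE_MAP = {
    "gcp": {"small": "db-f1-micro",
            "medium": "db-n1-standard-1",
            "large": "db-n1-standard-4"},
    "aws": {"small": "db.t3.micro",
            "medium": "db.m5.large",
            "large": "db.m5.2xlarge"},
}

def translate(provider, size):
    """Map an abstract size to a provider-specific instance type."""
    return SIZE_MAP[provider][size]

print(translate("gcp", "medium"))  # db-n1-standard-1
print(translate("aws", "medium"))  # db.m5.large
```

Because the developer only ever writes "medium", swapping which composition backs the claim, or adding a second composition on the same provider, never touches the developer's manifest. That is how the lowest-common-denominator trap is avoided: the platform team, not the tool vendor, decides what the enum means per backend.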
CRAIG BOX: The thing that the end user is creating is a Kubernetes object defined by a CRD. These translations that you mentioned, that take the CRD and map it back to the cloud provider's thing-- how are those defined?
DANIEL MANGUM: They are also defined as Kubernetes objects, actually. There are a few different layers here, the first being the managed resources. These are the CRDs that get added when you install a Crossplane provider into your Kubernetes cluster. So when you install provider-GCP, you get a whole slew of CRDs that come to the cluster that represent all the different GCP API types. The next layer of that would be composition, which says: this is a bundle of managed resources that have some relation together, with the mappings from an abstract type to those concrete types. And then there's the XRD, which maps, as you would guess, to a CRD-- we call that a Composite Resource Definition--
CRAIG BOX: I was hoping it was an Extreme Resource Definition.
DANIEL MANGUM: Well, it's up to interpretation, for sure. But it actually defines that abstraction layer. And if you look at one of these in practice, the YAML is going to look extremely similar to a CRD. It has an OpenAPI v3 schema in there.
The difference is, when you create an XRD, Crossplane's controllers actually respond by creating a cluster-scoped CRD instance of that. Our frequent naming convention is calling that a composite PostgreSQL instance. And then you can optionally offer a namespaced variant. So creating this XRD could result in the creation of a composite PostgreSQL instance CRD and a PostgreSQL instance namespaced CRD.
What that allows is for either of those resources to map to different compositions that are once again bundles of these managed resources that satisfy the abstraction. And so when you do that, you can have namespace isolation for different teams, where you can say team one in your team one namespace has the ability to create PostgreSQL instances. And those map to these various different compositions that we've defined-- maybe GCP, maybe AWS. And they just have that single interface.
All of our managed resources are cluster-scoped-- this is a somewhat controversial decision, and you'll see differences when compared to other similar projects in the Kubernetes space. We think of those as cluster-scoped because it's similar to something like Kubernetes Nodes: it's infrastructure that the infrastructure admin or the cluster admin is defining as available in that cluster. And then the abstractions are what developers are going to self-service on at the namespace scope.
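Daniel's description of one XRD producing a cluster-scoped composite CRD plus an optional namespaced claim CRD can be sketched as data. The `names`/`claimNames` structure mirrors how Crossplane XRDs are laid out as of this conversation, but the derivation function is an illustrative toy, not Crossplane's actual code.

```python
# Sketch of the two CRDs Crossplane derives from one XRD: a
# cluster-scoped composite resource, and (optionally) a namespaced
# claim that developers self-service on. Derivation logic is a toy.

def derive_crds(xrd):
    """Return the CRDs a Crossplane-like controller would create."""
    spec = xrd["spec"]
    crds = [{"kind": spec["names"]["kind"], "scope": "Cluster"}]
    claim = spec.get("claimNames")
    if claim:  # offering a namespaced variant is optional
        crds.append({"kind": claim["kind"], "scope": "Namespaced"})
    return crds

xrd = {
    "kind": "CompositeResourceDefinition",
    "spec": {
        "names": {"kind": "CompositePostgreSQLInstance"},
        "claimNames": {"kind": "PostgreSQLInstance"},
    },
}

for crd in derive_crds(xrd):
    print(crd["kind"], crd["scope"])
```

The split matters for the namespace isolation Daniel mentions: team one gets RBAC on the namespaced `PostgreSQLInstance` claim in its own namespace, while the cluster-scoped composite stays the platform team's concern.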
CRAIG BOX: You mentioned there are other projects in the space. Returning back to our inevitable comparison section, we have the Google Config Connector, which lets you define Spanner instances and BigQuery databases as you would with a CRD. I believe Amazon has a similar kind of thing, as do you as well. Is there a need to have those individual things now that there's something like Crossplane? Or how do you work with those teams?
DANIEL MANGUM: There's a variety of different engagements we've had with those teams. And really what it boils down to is the products are trying to solve different use cases.
So exactly as you mentioned, Config Connector, which is GCP's project in this space, is very similar to the Crossplane provider-GCP in that it models these resources at a granular level. It does create those at the namespace scope. So the idea there is, in different namespaces in your cluster, you can interact with the GCP API via Kubernetes resources. It doesn't strive to create these abstraction layers or present this kind of developer self-service sort of method. So if you want to create a Cloud SQL instance via Config Connector, you actually are tangibly creating a Cloud SQL instance, as opposed to some sort of friendly abstraction.
So I'd say the big difference between those two-- and I once again want to point to a blog post that came out recently that actually forms a comparison between Crossplane and some of these projects-- the big difference is Crossplane is for building platforms while those other products are primarily for provisioning infrastructure.
CRAIG BOX: Something that you often bring up when you're talking about Crossplane and how it's used to build platforms is that it provides a Heroku-like experience. When I think of Heroku, I think perhaps more something like Knative, which is very much just push source, get infrastructure.
What exactly is Heroku these days? And which of us is more right?
DANIEL MANGUM: I think that maybe Knative and Crossplane would be used together to do something that looks exactly like Heroku. As I mentioned before with that marketing copy you kind of called out, I would say that I would shy away a little bit more from the application side of things in terms of actually running the applications.
So Knative gives you a really friendly interface. And this is manifested in a service like Cloud Run on GCP. It's a really friendly, kind of serverless-like interface you get for deploying your applications. Crossplane, once again using the Kubernetes API, is a really friendly way for you to get the infrastructure that the workload you provision with Knative consumes.
So we actually have a livestream episode with Matt Moore from the Knative team where we talk about using Knative to deploy your application that consumes infrastructure provisioned by Crossplane. And that's a really nice experience, because they have similar output and input interfaces in the form of Kubernetes secrets or other external secrets stores. So I'd say using those two projects together would be a really powerful way to create your really true Heroku-type platform.
CRAIG BOX: When I run Crossplane in a cluster and then ask it to create a Postgres database, it can create objects like secrets, for example, that allow me to communicate with that database. How do I think about Crossplane if I have something that I create but then I want that to be available to multiple clusters?
DANIEL MANGUM: So you're getting at a really important point of two different deployment models of Crossplane. They depend on both your organization structure and your adoption maturity. So when folks typically use Crossplane to get started, they just either install it into their existing cluster or they spin up a local cluster and install it there. And they provision and consume infrastructure in the same cluster. And that is a nice experience if you have a limited scope or a small business that is just running a number of applications.
Frequently, folks are moving more towards using a multiple-cluster approach to managing different teams within an organization. And so when we see folks in those kinds of settings, we typically see a dedicated Crossplane control plane, an infrastructure control plane for their organization, that they use to provision their infrastructure. Some of that infrastructure may be the actual app clusters where workloads are running.
So I gave an example earlier of using GKE as something that Crossplane could provision. So we frequently see folks have a control cluster. And when a new team wants a cluster, they come to Crossplane and say, could you please spin up a GKE cluster, an EKS cluster, or something like that. And Crossplane says yes, gives you back the credentials, and then you communicate and deploy your application there. If you want an RDS instance, you could have that same experience. Ask Crossplane for it, and then ask Crossplane to put the credential information in that cluster that was provisioned. And that can happen via a variety of ways. But one of the things we haven't touched on here is there's a big focus on the different cloud providers for Crossplane providers that provision infrastructure. Any API is targetable here.
So a big use case that we see is provider Helm, which actually allows you to define, as a custom resource, a Helm chart. So you could compose that with something like a GKE cluster. We have a lot of organizations that actually spin up a GKE cluster with things like Prometheus or Linkerd installed in that cluster. But it's abstracted into a single interface that's something like a cluster with services. So you can basically create arbitrary providers that allow you to manage different things within infrastructure that you're spinning up as well.
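The "cluster with services" pattern Dan describes can be sketched in a few lines. This is an illustrative Python model, not Crossplane's actual API: the resource kinds and field names below are hypothetical stand-ins for what a composition does when it expands one abstract claim into several managed resources.

```python
# Toy model of a Crossplane-style composition: one abstract claim
# ("cluster with services") expands into several managed resources.
# Kinds and fields are illustrative, not real Crossplane schemas.

def compose_cluster_with_services(claim):
    """Expand an abstract claim dict into concrete resource specs."""
    name = claim["name"]
    resources = [
        {"kind": "GKECluster", "name": name,
         "nodeCount": claim.get("nodeCount", 3)},
    ]
    # Each requested service becomes a Helm release targeting that cluster,
    # mirroring what provider-helm does in a real composition.
    for svc in claim.get("services", []):
        resources.append(
            {"kind": "HelmRelease", "name": f"{name}-{svc}",
             "chart": svc, "targetCluster": name})
    return resources

# A team asks for one thing; the platform team's composition fans it out.
specs = compose_cluster_with_services(
    {"name": "team-a", "services": ["prometheus", "linkerd"]})
```

The design point is the single interface: the requesting team only ever sees the one abstract claim, while the platform team controls what it expands into.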
CRAIG BOX: Does that elevate you to the level of continuous delivery?
DANIEL MANGUM: It depends on what continuous delivery means in your organization, but absolutely. Anything that uses GitOps and allows you that constant update workflow is something that's going to work well with Kubernetes. And so bringing all of that into a single API definitely facilitates that kind of workflow.
CRAIG BOX: Crossplane was donated to the CNCF in July of last year. What was the decision-making process behind that move?
DANIEL MANGUM: As we talked about earlier, you know, the founders of Upbound and the founders of Crossplane already have a lot of experience with the CNCF, including taking Rook all the way to graduation from the CNCF, which is rarefied air given how few projects have done that. It was definitely a very seamless process in that regard. And we're still going through it.
We're currently a sandbox project, quickly moving towards incubation. And a big component of that is talking to, actually, end-user adopters and seeing how they're using Crossplane. Which, as someone who builds Crossplane, that's a super rewarding experience for me, someone taking the work and seeing how they're putting it into practice.
So I'd say it's been a great experience. One of the things I specifically want to call out as super useful about being part of the CNCF is some of the mentorship opportunities it provides. So Crossplane, for the spring cycle, is participating in the LFX mentorship program. We actually had a meeting with our new mentee this week. And that's a wonderful opportunity to get folks involved in open source.
And also, from a maintainer perspective, that'll give you the opportunity to have more hands on deck and also the ability to mentor someone and grow your own leadership and management skills. So there's countless benefits to being part of the CNCF, but that's a big one that I've seen lately.
CRAIG BOX: That was about, I want to say, 18 months after the project was founded. Why that length of time versus shorter or longer?
DANIEL MANGUM: I would say there's two main components to that, one of them being that, when you are looking at targeting so many different providers, there's a lot of community development that needs to happen. A good example, once again, would be the Terraform community: Terraform is still not a 1.0 project, and it has been around for many years. And a lot of that has been community development. So there's these different providers. There's lots of different APIs and countless more being added every day. So just growing in coverage is a big aspect of getting a project to a point where it's super useful.
And the other part of that is just exploring how organizations currently are structured and how they desire to be structured in the future and building a system that facilitates that. So a big phrase that we use in Crossplane-- and you'll see it used across the cloud native landscape-- is separation of concern. And really understanding what the different personas are within an organization, how they want to interact with each other, and what their roles should be, was vitally important for establishing a strong foundation for Crossplane to be reliable and production-ready for the future. And that was eventually manifested both in the contribution to the CNCF and in the 1.0 version that we released at the end of last calendar year.
CRAIG BOX: That was the next thing that I wanted to ask about-- what things needed to be done before you were willing to call the project 1.0? And given, for example, as you say, Terraform hasn't hit that number in many years but still has that adoption, was that a big, important thing for you to do?
DANIEL MANGUM: I think it was really important. Because end users have to have a lot of confidence in something that's managing their infrastructure. You have to have the guarantee that it's not going to go delete your production database. And one of the benefits that we get versus something like Terraform is we take advantage of the incredible work that has already been done on Kubernetes, in that we're not managing state, we're not managing eventual consistency, we're not managing API rate limiting and that sort of thing, in most regards. There is some provider rate-limiting that we do. But a lot of those benefits of a robust, production-ready distributed system, we got right out of the box, which definitely accelerated the ability to get to 1.0.
One important caveat here is the core Crossplane controllers are 1.0, which are all the things that facilitate packaging up these abstractions and delivering them and composing them and that sort of thing. Providers are being developed all the time. But many of those are not 1.0, so they don't have full coverage. But getting that core Crossplane to 1.0 also facilitated further community development because those APIs that were being targeted by the providers were guaranteed to be stable at that point.
CRAIG BOX: Now Crossplane 1.1 has just recently been released. What's new in 1.1?
DANIEL MANGUM: There's a couple of different things. So I mentioned a little bit about some of the transformations you can perform in the composition layer of Crossplane. So a big one that was contributed by a new community member was what we call bidirectional patching.
So we've talked a lot about taking the fields on the abstract resource, the PostgreSQL instance, and mapping those to a Cloud SQL instance. Obviously the different underlying resources in that composition, like the Cloud SQL instance, expose different things when they're created. And some of that information, you want exposed back at the abstraction layer.
So for instance, the status of the resource, different fields that it exposes in terms of how it's running in production, sometimes you want some of those patched back onto the abstraction. So that was a big improvement that was added. And there are a couple of others around composition.
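Bidirectional patching can be sketched as two plain field copies: spec fields flow from the abstraction down to the composed resource, and selected status fields flow back up. The field paths below are made up for illustration; real compositions declare these patches in YAML rather than code.

```python
# Toy bidirectional patch: forward-patch a spec field from the
# abstraction to the composed resource, then back-patch a status
# field (an endpoint) from the composed resource to the abstraction.
# Field paths are illustrative, not real Crossplane schema.

def patch(src, src_path, dst, dst_path):
    """Copy the value at src_path in src to dst_path in dst."""
    value = src
    for key in src_path:
        value = value[key]
    node = dst
    for key in dst_path[:-1]:
        node = node.setdefault(key, {})
    node[dst_path[-1]] = value

claim = {"spec": {"storageGB": 20}, "status": {}}
cloudsql = {"spec": {}, "status": {"endpoint": "10.0.0.5"}}

# Forward: user intent flows down to the managed resource.
patch(claim, ["spec", "storageGB"], cloudsql, ["spec", "diskSizeGb"])
# Backward: runtime information surfaces on the abstraction.
patch(cloudsql, ["status", "endpoint"], claim, ["status", "address"])
```

The backward direction is the 1.1 addition being described: the person consuming the abstraction never looks at the Cloud SQL object directly, but still sees its endpoint.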
Another big thing that we iterated on was the provider credential method. So I talked a little bit earlier about object-based versus user-based authentication. So in Crossplane, the operators are responsible for authenticating to the providers. And those Kubernetes operators need to have appropriate credentials to do that. Frequently, we would use Kubernetes secrets to provide those credentials.
However, there are a lot of different issues with using Kubernetes secrets in different settings. Sometimes those can be alleviated by just encrypting them at rest in etcd. But sometimes folks have other solutions they like to use.
So a big integration that we had in this release was integrating with the Vault sidecar. So you can actually store your AWS keys in Vault or something like that. And then, when you provision your infrastructure using Crossplane, you can just say, hey, use this key from Vault. And it will know how to actually integrate with that and go and fetch that from the appropriate store.
CRAIG BOX: Two questions to round out the Crossplane discussion. The logo is an ice block or a popsicle with three different colors. Is there a canonical flavor for those colors?
DANIEL MANGUM: I am actually colorblind. So I am not the person to answer that question.
CRAIG BOX: Were you at least aware that there are three different colors?
DANIEL MANGUM: I was aware that there were different shades of gray present on that popsicle. I do know that the popsicle is rather popular. But I do not have any guesses as to what the actual flavors in there are.
CRAIG BOX: And why is Crossplane spelt with a C and not with a K?
DANIEL MANGUM: I think because we just didn't jump on the bandwagon fast enough, I guess. But at this point, there's been so much overuse of the K in different words and project names that I'm quite pleased that we stuck with something that was the actual spelling of the word we were trying to emulate there.
CRAIG BOX: I both appreciate that and want to see that as your next April Fool's Day blog post.
DANIEL MANGUM: [CHUCKLES] That's a good one. Thanks for the marketing ideas.
CRAIG BOX: Now, other things that you do, some as part of your day job and some, obviously, as hobbies, you are a tech lead for the Kubernetes SIG-Release team. How did that come about?
DANIEL MANGUM: So when I first started at Upbound, which was after I worked on Crossplane in the open-source environment while I was in school for a time, one of the first things I did was get involved upstream by joining the SIG-Release shadow program. So Kubernetes releases every three to four months. And during that process, there are a number of roles that need to be filled to make sure that a release goes smoothly. And I joined the CI signal role, so the Continuous Integration signal. And that's basically watching all the different tests of Kubernetes and the different components, making sure they're staying healthy. When they get unhealthy, either fixing them or finding the appropriate person to do so.
And I joined as a shadow, did that for a couple of cycles, eventually became a lead, and then moved into working with the release lead. And at that point, I'd gotten really involved in building some of the tooling around releasing Kubernetes, which involves building, packaging, actually publishing the artifacts and that sort of thing.
And I was asked to join as a tech lead for the actual SIG, and since then have had the opportunity to help grow some other folks through the shadow program. And if folks are interested in being part of that shadow program, in my opinion, it is the easiest and smoothest way to join the upstream Kubernetes community. So I'd highly recommend that to folks.
CRAIG BOX: We are always very happy to speak to the release lead when each release comes out. Have you ever considered leading a release yourself?
DANIEL MANGUM: I have not. Because the last release, I was a release shadow. And right after that, I became a tech lead for SIG-Release, at which point you're not really eligible. I guess you could be eligible, but you typically would not actually be the release lead for an individual release, since your responsibilities span over all the releases.
So I haven't considered it, but I did serve in a shadow role and definitely got to learn. Jeremy Rickard was the release lead for 1.20, and I definitely learned a lot from that experience. And I'm excited to see what Nabarun talks about when he comes on the show in a few weeks.
CRAIG BOX: You have a particular use case with Crossplane where you don't actually really use much of the container magic in Kubernetes. You're using it as a control plane, using its API server and so on.
Do you see that as being something that other people are going to pick up on over time? And do you think that will change the way that Kubernetes is built or released?
DANIEL MANGUM: I do think so, particularly around the API server. And I'd love to say this is my idea, but this is actually something that's been in talks since the creation of Kubernetes, from folks like Brian Grant and Joe Beda and others, where they've talked about the real value being the API rather than the container runtime implementation. So I definitely think that that will affect how Kubernetes releases happen in the future. And I also think you'll start to see more and more distributions of Kubernetes, of which there are many already, that are just packaging things like control planes.
CRAIG BOX: You maintain a website of documentation for CRDs in the Kubernetes community. A lot of people would measure the health of a project by the number of CRDs installed. Why is that a thing people do?
DANIEL MANGUM: I hope that that is certainly not true. Well, maybe they're saying that the lack of CRDs would indicate the health, which I think is probably a more appropriate measure. That's a little bit confusing coming from Crossplane, where we install hundreds of CRDs into your cluster.
But there has been a wide, wide proliferation of the usage of CRDs because it is such a strong model. And understanding what's actually being installed in your cluster is a lot of times pretty opaque. You helm-install something, and you end up with a ton of different types, and you don't know what they do.
So the motivation for doc.crds.dev was originally that we were building lots of projects that had many CRDs. And we had some manual tooling to generate some markdown documentation and put those on a website. That was useful for a time. But as folks started to develop more and more Crossplane providers, we didn't want to regenerate our documentation every time someone did that.
So doc.crds.dev actually just crawls Git repos. And when you request a specific version, it'll go and index that, basically finds all the CRDs, generates some documentation, and will serve those from that point on for you. So that gives us a way to dynamically create documentation and serve it for folks whenever new releases of any products happen. And obviously that's not scoped to just Crossplane, that's any project that installs CRDs.
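The crawl-and-index step can be approximated in a few lines: split a repo's multi-document YAML into documents and keep the ones that declare a CustomResourceDefinition. doc.crds.dev's real indexer does much more (versions, groups, rendered docs); this sketch shows only the core filtering idea, using plain string matching to stay dependency-free.

```python
import re

# Minimal CRD filter: given the text of a multi-document YAML file,
# return the documents that declare kind: CustomResourceDefinition.
# A real crawler would parse YAML properly; this is just the idea.

def find_crds(yaml_text):
    docs = re.split(r"(?m)^---\s*$", yaml_text)
    return [d.strip() for d in docs
            if re.search(r"(?m)^kind:\s*CustomResourceDefinition\s*$", d)]

# A hypothetical manifest mixing a CRD with an ordinary resource.
manifest = """\
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: buckets.example.org
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: not-a-crd
"""
crds = find_crds(manifest)   # keeps only the Bucket CRD document
```

Run once per requested version tag and cache the output, and you have the shape of the serve-from-that-point-on behavior Dan describes.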
CRAIG BOX: Last week, it was announced that you have had a talk accepted at KubeCon virtual EU. So congratulations.
DANIEL MANGUM: Thank you. I am very honored.
CRAIG BOX: That talk is about FPGAs, which you mentioned an interest in upfront. What will you be talking about?
DANIEL MANGUM: I'm specifically going to look at using FPGAs in a more consumer setting. For folks that aren't familiar with FPGAs, it stands for Field-Programmable Gate Arrays. Reconfigurable hardware is probably the easiest way to describe it. So you can write your hardware definition in an HDL, or hardware description language, like Verilog or VHDL, or there's lots of other higher-level constructs at this point. And then you flash that onto an FPGA, and it essentially becomes that hardware target.
So a big motivation for this recently is the RISC-V ISA, the Instruction Set Architecture, which has become very popular, and using FPGAs to do that, since there is not a lot of RISC-V hardware available. But frequently, folks that are using FPGAs are very large organizations with very latency-sensitive applications where they may, in the past, have designed their own ASICs, or Application Specific Integrated Circuits, that take the optimizations down to the hardware level.
In my opinion, as someone who doesn't use FPGAs in my day job but is fascinated by the technology, I'd like to make those more accessible to folks in a consumer setting. So there's lots of $100-or-less FPGA development boards that you can buy yourself. But understanding how to actually use those in a productive way is definitely very opaque.
And once again, talking about the power of the Kubernetes API, one of the things that I want to do in that talk and with some other kind of projects I'm working on outside of my day job, is make the Kubernetes API an interface for programming and using FPGAs. And I think there's a lot of interesting use cases. And hopefully folks who work in the same space as me who aren't necessarily exposed to hardware are going to have the ability to take advantage of this really great technology without having to understand every single thing that's happening behind the scenes.
CRAIG BOX: Is there anything in the cloud-native ecosystem or Kubernetes itself that you think could benefit from being hardware accelerated?
DANIEL MANGUM: There's lots of jokes around accelerating the YAML processing of Kubernetes with dedicated hardware, and I don't think that's really a bottleneck that we need to solve at this point.
CRAIG BOX: Did you see that someone optimized a Grand Theft Auto game last week, where a JSON blob that was loaded took 10 minutes to load? And they figured out that, if they changed the implementation of sscanf or something, they could do it in one minute instead.
DANIEL MANGUM: Absolutely. I did see that. That was making the rounds on Hacker News.
But in terms of application-specific hardware, a lot of things that folks might use it for are things like video processing or anything that is algorithmically predictable. Video processing is, I guess, the one that comes to mind right now, but parsing and some other things like that are obviously useful as well. And those are things that smaller organizations typically don't do themselves right now; they'll offload that to another service or something like that.
So enabling some of those use cases, as well as just letting folks explore. Maybe one of the reasons why we don't use this that much right now is because the interface is really inaccessible. So making that more available to folks and seeing what folks come up with when they have that kind of access.
CRAIG BOX: Your KubeCon talk page has a little profile picture of you in the corner with the lovely, clean-shaven short-haired look. And I'm going to have to guess that picture was taken quite some time ago.
DANIEL MANGUM: It was. I was going to get married right around when the coronavirus pandemic broke out here in the US. And we ended up delaying our wedding, and then eventually getting married in a very small ceremony, at which point I had to get a very COVID-safe haircut, which is around the time that that picture was taken.
Since I actually got married, I have not cut my hair at all, which is why you're having the pleasure of getting to see me right now. Which, unfortunately, the listeners will not have that privilege.
But yes, I do look quite different from the picture there. But I'm trying to embrace the COVID haircut as long as possible.
CRAIG BOX: I wondered if it was correlated with you becoming a YouTube star.
DANIEL MANGUM: I don't know if there's correlation or causation there. But I think that I might have become someone to follow on YouTube despite my appearance at this point.
CRAIG BOX: You host two shows on YouTube. One is called "The Binding Status," which is a Crossplane thing. And then you also host a "Flake Finder Fridays," which has just launched recently, which is more of a Kubernetes community thing. Tell me about those two shows.
DANIEL MANGUM: "The Binding Status," or TBS as we frequently call it, is Crossplane-specific. The main purpose there is to bring on folks from other cloud-native products and show the power of integrating on the Kubernetes API. And it's quite astounding the list of folks that we've had on that show and how easily their projects are able to integrate. Anything that needs to consume infrastructure, Crossplane is a great fit for doing that.
And then the "Flake Finder" show is brand new. We've actually only had one episode, and we will have one coming out the week, I believe, that this show is released. And that's where myself and Rob Kielty, who is a fellow contributor to Crossplane, go and look at parts of Kubernetes that have broken over the past month, and show how it was discovered that they were broken, how they were fixed, and that sort of thing. So that comes out of a lot of experience that we've had in that CI Signal role that I was mentioning earlier.
The goal there is that folks get a little more insight into the actual Kubernetes codebase as well as Kubernetes testing and how you spin up infrastructure to test a massively distributed system like Kubernetes.
CRAIG BOX: Finally, you've been to every state in the US. Which one had the best cheesesteak?
DANIEL MANGUM: I'm not a big cheesesteak eater. But I think that I would have to say that Philadelphia was the best place that I've had a cheesesteak. I think that's a requirement.
I have been to every state. I have not traveled, obviously, in the past year or so. And I actually haven't traveled that much just recently in general.
When I was growing up, my father was a small-business owner, and so had the ability to close down shop for two weeks every summer. And we'd get in a GMC Yukon XL, kind of a big SUV, which is actually the car I drive today still. His goal was basically to take my family to all 50 states and see the national parks in each of them. So that was definitely a unique experience I have, and might be the only icebreaker type of fact that's useful to share about me.
CRAIG BOX: Well, a noble goal, and a fun experience, I'm sure. Thank you very much for joining us today, Dan.
DANIEL MANGUM: Absolutely. Thanks for having me.
CRAIG BOX: You can find Daniel on Twitter @hasheddan, or on the web at danielmangum.com. Now, is that hashed like an algorithm or hashed like corned beef?
DANIEL MANGUM: Hashed like an algorithm. [CHUCKLES]
CRAIG BOX: Ken, thank you so much for helping us out with the show today.
KEN MASSADA: This was fun. Like, I feel at home. [CHUCKLES]
CRAIG BOX: Glad to have you back. If you enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on Twitter, @KubernetesPod or reach us by email at firstname.lastname@example.org.
KEN MASSADA: You can also check out the website at kubernetespodcast.com, where you will find transcripts and show notes as well as links to subscribe.
CRAIG BOX: You can find Ken's podcast, "Things We Don't Say," at twdspodcast.com. I'll be back with another guest host next week. So until then, thanks for listening.