#23 October 3, 2018
Andrew Phillips (PM) and Lars Wander (Software Engineer) from Google Cloud talk to Adam and Craig about the difference between CI and CD, and how to apply these practices to your release and rollout processes.
Do you have something cool to share? Some questions? Let us know:
ADAM GLICK: Hi, and welcome to the Kubernetes podcast from Google. I'm Adam Glick.
CRAIG BOX: And I'm Craig Box.
[MUSIC PLAYING]
ADAM GLICK: Hey, Craig. How's it going?
CRAIG BOX: Good, Adam. Have you seen any good movies lately?
ADAM GLICK: Funny you should ask. Actually, I finally got a chance to see "Scott Pilgrim vs. The World," which I realize I'm coming late to this one. But it was surprisingly good for those of us that love geek culture. I found it very much in the vein of "Ready Player One." So if you saw that, and you enjoyed that, "Scott Pilgrim" definitely worth picking up on Netflix for a viewing.
CRAIG BOX: Can you give us a spoiler and tell us who wins?
ADAM GLICK: I won't tell you, but it is listed in the title.
CRAIG BOX: Well, that's who's competing. Who wins?
ADAM GLICK: Ah, you'd have to watch the movie to find out. I've narrowed it down for you. I mean, it's either Scott, or it's the world.
CRAIG BOX: Right. I did watch "Ready Player One." I think that it was a vast improvement over the book. The book was just a little laundry list of things that the author liked about the '80s. So, good on Steven Spielberg for turning--
ADAM GLICK: Oh my gosh. Heresy. You liked the movie better than the book?
CRAIG BOX: I did.
ADAM GLICK: Oh my god. This is like a frame-it moment. People always say the opposite.
CRAIG BOX: No. There are a few occasions. It's not Craig and Adam's movie podcast, so I won't dig too deeply into them.
ADAM GLICK: Fair enough.
CRAIG BOX: Thank you to everyone who came up to me after my talk at the Cloud Summit in Sydney. It is so gratifying. I feel a lot of times Adam and I just sit here and speak to each other once a week and occasionally bring on a friend. But when people actually come up and say, yes, we like this-- Kenneth, if you're listening, a lot of people liked your support episode, so good stuff. Gave away a whole bunch of stickers.
I must say, and again, this is something that doesn't happen every day. I'm in New Zealand this week and went around to my parents' place for dinner. And my dad asked me for a sticker. So that implies that he knows that stickers are a thing, and that he treats them as valuable pieces of currency. So, hi, Mum and Dad. I'm sure you'll hear this one as well.
ADAM GLICK: Excellent. Will he put it on his laptop? Or is it going to go on a desktop under the desk somewhere?
CRAIG BOX: I'm not sure. I have a couple in the box waiting for him, which he'll probably get after this episode is released. So it'll build up the anticipation. But I've never really seen him put stickers anywhere, so they'll probably go in a drawer.
ADAM GLICK: Gotcha.
CRAIG BOX: Actually, no. Come on. It's Mum and Dad. They'll go on the fridge. That's where all things go.
ADAM GLICK: I was about to say, why don't you just make a fridge magnet, just like short circuit.
CRAIG BOX: Well, don't-- you're trying to give away too many pieces of swag. We'll get to fridge magnets next.
ADAM GLICK: Let's get to the news.
[MUSIC PLAYING]
CRAIG BOX: Kubernetes 1.12 has been released. Headline features include bringing TLS bootstrapping of nodes to general availability and support for VM scale sets, Azure's version of Managed Instance Groups. A number of companies have posted blogs summarizing what's new, but my favorite is Google's blog post, mostly because of how handsome the author is.
ADAM GLICK: You mean, you're the author.
CRAIG BOX: Maybe. Either way, you can find it in the show notes.
ADAM GLICK: The CNCF has released the schedule for KubeCon in Seattle this December. They expect this to be the biggest Kubernetes event yet, with 7,000 attendees. Co-located events include vendor trainings and community days, like the Kubernetes Contributor Summit and EnvoyCon.
CRAIG BOX: Rook, a file, block, and object storage platform that joined the CNCF Sandbox eight months ago has now moved into the incubator phase. Rook is a cloud-native orchestrator for the Ceph, with a C, storage system. It is currently at version 0.8 and has grown substantially with 13 times the number of container downloads and double the number of GitHub stars and contributors since entering the sandbox.
ADAM GLICK: Another Google Summer of Code project was highlighted on the CNCF blog this week, telling the story of Anirudh Murali from Anna University in India. Anirudh used OSS-Fuzz, an open-source fuzzing utility built by Google, to fix a number of issues found with the Envoy proxy, best known as the data plane of service meshes like Istio.
For anyone whose memory is a little fuzzy about fuzzing, it's a way to test for bugs by having tools throw lots of different kinds of input at a program to see how it reacts. The calls can be guided or random, and the results are logged. Fuzzing can find both crashing and security bugs that unit tests alone would not catch.
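The idea can be sketched in a few lines of Python. A real tool like OSS-Fuzz is coverage-guided and far more sophisticated; this is only a naive random fuzzer, and all names here are illustrative:

```python
import random
import string

def naive_fuzz(target, trials=100, seed=0):
    """Throw random printable strings at `target`; collect inputs that crash it."""
    rng = random.Random(seed)  # seeded so runs are reproducible
    crashes = []
    for _ in range(trials):
        payload = "".join(rng.choice(string.printable)
                          for _ in range(rng.randint(0, 20)))
        try:
            target(payload)
        except Exception:
            crashes.append(payload)  # an input a hand-written unit test never tried
    return crashes

def brittle_parser(s):
    # Stand-in for code with an input-handling bug.
    if len(s) > 5:
        raise ValueError("unexpected long input")

found = naive_fuzz(brittle_parser)
```

Coverage-guided fuzzers replace the random generator with mutations steered by which code paths each input reaches, which is how they find the deeper crashing and security bugs mentioned above.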
CRAIG BOX: Finally, Azure hosted their Ignite conference last week. Microsoft touted Kubernetes support as the number one networking feature of the upcoming Windows Server 2019, and hopes to have Kubernetes on Windows generally available in the next Kubernetes release.
SQL Server 2019, which was also announced at the event, will be able to run its new big data analytics services on Kubernetes. They also announced that the Container Registry now has support for Helm Charts, and that they are adding OCI image formats to their registry. Both of these features are in preview.
ADAM GLICK: And that's the news.
[MUSIC PLAYING]
Our guests today are Andrew Phillips and Lars Wander, who are product manager and software engineer at Google Cloud, respectively. Welcome, Andrew.
ANDREW PHILLIPS: Hi, everyone.
ADAM GLICK: Welcome, Lars.
LARS WANDER: Hey.
ADAM GLICK: Why don't we start out at a basic definitional level. How would you define CI and CD?
ANDREW PHILLIPS: I think the main thing we're looking for here is the concept of, you have a bunch of code in some repository, typically. And you have a runtime, production, and a bunch of others as well. And, in general, there's this huge hole in between the two. And you have to fill that hole somehow, ideally with a process that allows you to safely and reliably not only get feedback back to developers, but also eventually get the code out to production.
Often people talk about this as kind of one single linear pipeline, if you like. Pipeline is a term that's commonly used here. But I think it makes more sense to think of it as two separate but linked-- asynchronously linked, if you like-- processes-- one about, what do I need to do every time I commit a code change or make a pull request or something like that? And then the other, sometimes, and maybe every time in an ideal setup, take a release candidate and actually ship it out to production.
So CI/CD-- the terms are continuous integration and either continuous delivery or continuous deployment. And we could talk for hours about the exact difference between those. But I think more generally, it makes more sense to think about this as this is the software delivery part of your process, where you're taking code changes, getting feedback on them, and at a cadence that makes sense for you, actually taking them and rolling them out to production.
LARS WANDER: Yeah. And this is often thought about in terms of delivering changes to cloud web services. But this applies to things like Android APK delivery, making changes to IoT device drivers. It's really like Andrew said, having some source change that you want to push, and then deliver it to some production environment, wherever that is.
CRAIG BOX: It feels that you could group continuous integration with the process of writing code, and treat delivery as an entirely separate process. Why do you think people group CI and CD together?
ANDREW PHILLIPS: I think, from my perspective, one thing we see here is that as people are reading lots of blog posts about DevOps or trying to adopt that, or following on from agile implementations or whatever, a lot of the terminology and a lot of the material they'll come across talks about these things as this magical single pipeline where you can achieve incredible efficiency by making sure every commit goes out to production.
And I think that's a useful mental model as a goal, in the sense that what it forces you to ask yourself is, what are the steps that I normally carry out in this process? And where are the bottlenecks? And where's the slowness?
And typically, what most people end up discovering is that if you automate a lot of the steps that might be done manually today, you can make your process much more reliable, much more efficient. And you can give yourself the capability to run it more often.
But I think as you were sort of hinting at, it is a bit of a fallacy to think that you have to do it every single time. I think if we look at Google internally, and certainly, if we look at many, many other companies out there as well, the reality is that they get feedback to the developer, hopefully with every single commit, and ideally as quickly as possible. But that's definitely not the cadence with which they deploy every application to production.
And so, again, while I think it's useful to think of it as this kind of conceptual tube, if you like, from code to production, the reality is that it's more useful to ask yourself two separate questions. How can we optimize the developer feedback cycle? And then, how can we optimize the application rollout cycle? And what cadence, what relationship between the two makes a lot of sense?
LARS WANDER: In the best case scenario, you really could have every single commit go immediately into production. But obviously, this takes some time. Say, it takes 10 minutes. And if you're at a scale where you have developers committing to your repository 10 times in the time that it takes to actually roll out a change, you start to run into certain logistical problems that make this infeasible.
And this is why on one hand, it's good that these two processes are linked because some handoff has to happen between the continuous integration and the continuous deployment. But at the same time, it's often not totally feasible.
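Lars's arithmetic can be made concrete with a back-of-envelope sketch; the numbers are purely illustrative:

```python
def commits_per_rollout(rollout_minutes, commits_per_minute):
    """How many commits pile up while a single rollout is in flight.

    Once this exceeds 1, strict one-commit-per-deploy CD is infeasible,
    and commits inevitably get batched into release candidates.
    """
    return rollout_minutes * commits_per_minute

# A 10-minute rollout with a commit landing every minute means each
# deploy effectively ships a batch of roughly 10 commits, not one.
batch = commits_per_rollout(rollout_minutes=10, commits_per_minute=1)
```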
ANDREW PHILLIPS: And just to add to what Lars was saying, there's not just the question of technical feasibility from the process side. There's also the question of what actually works best for your users or customers.
If you think about, say, delivery to a mobile app store, for example. There's a cadence there that is enforced not just by the technology, but also by things like manual review, for example. Or indeed, what your users are willing to stomach in terms of the speed and frequency of update notifications they get.
Imagine you are working on the system, and it pops up an update notification every 25 seconds or something like that. That would drive you crazy. So, again, having the capability and setting it as a goal is useful. But the reality is that you should find the balance that works for yourself and your team.
And I think another useful thing about thinking about these in a somewhat decoupled manner is that it frees you from this idea of thinking you have to have one tool to solve it all. And I think that's, again, where this mental model of it's a single pipeline, so it kind of mentally must run through a single automation technology often gets people a little bit stuck. If you think of them as two somewhat separate things, that also frees you with looking at potential tooling implementations, and then picking what's best for the individual subsections.
ADAM GLICK: CI/CD pipelines can be used for all sorts of code creation and deployment. But we often hear about them in the context of Kubernetes. Do you have a sense for why we so often hear about CI/CD pipelines coupled together with Kubernetes?
ANDREW PHILLIPS: Well, I think one of the reasons is that Kubernetes is what lots of teams are looking at now. Software delivery, or CI/CD, is typically one of those areas where people, if you ask them about how it's going for them today, the sort of default answer is, well, it could be better. And then the follow-up is, yeah, but we'll fix it next time. Because it's the kind of area that you sort of-- you get running, and then you hope you don't have to look at it again until it breaks.
And Kubernetes, for a lot of organizations, is that next time. It's a new platform they're adopting. And that gives them the opportunity to do things better, if you like, this time around.
And so, as they start playing around with Kubernetes and figuring out how to run it, an immediately adjacent question of course is, OK, well, now we have a Kubernetes cluster or GKE cluster or whatever. Now, we want to actually get applications running in there. So it's natural for them to try to figure out, OK, so what's the software delivery process look like?
And the reason I think they ask a lot is, OK, so how do we do this the right way? Nobody wants to make the same mistakes as last time, kind of patch it up and just about get it working. They're looking for guidance and best practices about what's the right way to do this because, of course, Kubernetes does have some quite strong opinions about the right way to represent things.
And one thing that is quite specific about Kubernetes is that it brings the notion of configuration very much to the fore, through the fact that you are explicitly defining manifests for lots of things. And even though managing configuration in your software delivery pipeline is not a new thing, in the Kubernetes world they're very much first-class artifacts, which hopefully gives us a very good chance this time around to really tackle that problem in a much better way than we've been able to do so before.
LARS WANDER: Yeah, absolutely. It also in some sense challenges some fundamental assumptions you might make about building deployment tooling, because Kubernetes has built-in mechanisms for handling orchestration of your rollout. So it has the deployment object that says, when you submit a change to it, I'll make sure that over some period of time this change is actually rolled out for you. And that complexity is lifted from the developers, and from the tooling they might otherwise have to build and maintain, and handled entirely by Kubernetes.
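Lars's point can be seen in a minimal Deployment manifest sketch; the names and image tag below are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app          # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # Kubernetes paces the rollout for you
      maxSurge: 1
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: example/app:v2   # change this tag; Kubernetes handles the rest
```

Submitting a new image tag is all the delivery tooling has to do; the controller replaces pods gradually according to the strategy, which is exactly the orchestration complexity being lifted out of the pipeline.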
CRAIG BOX: But there are a lot of use cases where the eventual application isn't going to be deployed in a Kubernetes environment. You mentioned mobile applications before. Do you see a lot of people who are using Kubernetes just for doing the build and CI part of things?
ANDREW PHILLIPS: Certainly one of the things that we see people ask about whenever they're looking for the good way or the right way to do this with Kubernetes is if you're thinking from a perspective of, ideally, since I've now got this new runtime platform, couldn't I make sure all my supporting tooling also lives in this runtime platform?
We certainly see people saying, well, you know, maybe my entire software delivery pipeline can just live in the Kubernetes cluster. I think to be fair, by and large, we still see that question arising most often in the context of the applications actually running to Kubernetes or running on top of Kubernetes, rather.
But, of course, there's no inherent reason why once you have good software delivery tooling there, you shouldn't end up shipping it wherever you need it to go. And certainly, there are a bunch of tools that run well on top of Kubernetes and support delivery to Kubernetes very well, but also can deploy arbitrary applications to lots of other platforms. So it's definitely a technical possibility.
CRAIG BOX: If I'm someone who's already got an investment in a particular set of tooling, should I be looking at taking that tooling forward and investigating its Kubernetes support? Or should I be looking at brand new tooling that's built in the Kubernetes native fashion?
ANDREW PHILLIPS: Ha. The consultant hat comes on immediately. And I think that the realistic answer, of course, is it depends. I think one of the big challenges in this space is that there is a desire always for the one size fits all solution. And I think history has proven, and with Kubernetes we're seeing this again, that that really isn't true, of course.
I think the questions to ask yourself are, now, what's the kind of complexity of your overall environment? And, of course, also, what's your desire or capability to live in a kind of a new world?
I think the more we talk about organizations that have a non-trivial number of development teams that they want to onboard, or that are looking to maybe abstract away some of the complexity of the underlying platform, and especially, if you talk about things like environments where regulation or compliance or policy become very important, like financial services or health care or these kind of environments, most likely the investments you've made in your existing platform are sufficient. And those platforms provide capabilities that are important enough for you to try to figure out how you can use them to support Kubernetes well.
At the other end of the spectrum, if you're a small start-up team of five or six developers, and you're just interested in getting something that works really simply and that supports the one particular target platform, you're looking at well, and you don't have much need for these maybe more advanced or complicated features, then looking for something that runs inside your Kubernetes cluster, and therefore reduces your cognitive load in terms of the amounts of tools and so on that you need to maintain is a great way to start out.
CRAIG BOX: There was an article published on The New Stack a while back that said the best CI/CD tool for Kubernetes doesn't exist. Are there any early leaders in the space of Kubernetes-native tooling for this problem?
LARS WANDER: As in any new space, of course we see a whole bunch of tools that are immediately created pretty much to address all the new problems that people have. And running builds or running arbitrary commands or tasks is no exception. There's a bunch of companies that are relatively well-known-- I guess Weaveworks would be one of them-- well-known for writing the blog post about GitOps. But there's a ton of tooling in this space.
I think it's fair to say that a majority of them cover the problem to a specific level of depth that was required, often for the originators of the tooling, because a lot of this tooling is also written by teams that need to make something work for themselves, and then it's open-sourced, as is very common in this space. And I think that's always a fascinating time to be involved because you see lots of experiments happening, basically. It's a very rich ecosystem.
I think the flip side of that to bear in mind is that there's still a lot of things moving around here like Kubernetes. The internal support that comes native with a platform, and the right way to provide abstractions for people to build on top of has changed a lot and changes very rapidly in such a dynamic environment. So the tooling will take a while to settle down.
And so, I think if you're adopting a, quote unquote, "native approach"-- it's not always quite clear what exactly that means-- I think you, A, need to bear in mind that it's probably going to cover a limited set of use cases because that's often what it's designed for. And opinions on what the best way to do things are will still change very rapidly.
So you're going to be on a bit of a roller coaster ride. And that's, of course, where some of the more established existing tooling has settled down a little bit and has gone through that kind of roller coaster experience already.
CRAIG BOX: When we look at the more traditional CI tools, Jenkins obviously comes to mind, as well as hosted platforms like Travis and CircleCI, for example. They all seem like they've got on the Kubernetes bandwagon, either replatforming to run on Kubernetes, or giving adapters so that they're able to do builds inside a Kubernetes environment. Are they still relevant today?
LARS WANDER: No. Absolutely. Especially with the new Jenkins X-- a build of Jenkins, basically, which is in a sense even more configured for Kubernetes, for doing builds within Kubernetes off pull requests, really embracing the GitOps model for pushing change into production.
One thing, though, where these tools fall short is in a one-shot model of deployment, where they hand off a change to the Kubernetes orchestration and then at that point say that the control flow, from their perspective, is done. Ideally, you would like to make these kinds of things very visible to your developers-- to see how the rollout's going, to see why things fail, when they fail, and, as early as possible, expose any sort of friction or problems you might run into.
ANDREW PHILLIPS: I think we see-- just to continue from what Lars was saying-- a very common pattern that happens, which is: stage one, a team adopts Kubernetes, or a company investigates Kubernetes. Stage two, they use whatever tooling they're familiar with or whatever general purpose orchestration tools they have around to build their first pass at a software delivery pipeline, wrestle with manifests, set up lots of repositories, and basically script together whatever they need to get it to work.
Stage three, take a deep breath and say, it seems to work right now. And then stage four is often, OK, how do we scale this out to the rest of the organization? And I think that's very often when the more challenging questions start to pop up.
CRAIG BOX: It's at the bargaining stage.
ANDREW PHILLIPS: Exactly. That's when the challenging questions start to pop up around, OK, how much Kubernetes expertise do we actually want to expect everybody to have? How good is the debuggability of these things? How much traceability does it provide? How simple and self-service is it really?
And that's often when people start to say, well, OK, the script-based solutions or the solutions that are all based around "let me go into a repository and edit this manifest file in a text editor," maybe are suited to the teams where everybody's a Kubernetes expert. But that's probably not a realistic setup for hundreds of developers in our organization.
Arguably, Kubernetes-- the defining factor of Kubernetes in general, is that rather than providing some baked abstraction, it gives you the capability to define your own abstractions. And in the software delivery setup, it's no exception.
Like, we definitely see Kubernetes users going through this maturity cycle, where they very quickly get to the point where they recognize that they need some way to abstract away some of the underlying details and complexity. And that's where often you have to start to re-evaluate your tooling choices here because some of the native tools that you were talking about are very much based around the idea of giving you full control over the underlying assembly code, if you like, the YAML.
ADAM GLICK: What do you think are some of the most common mistakes that you see people make when they're designing their own CI/CD pipelines and processes?
ANDREW PHILLIPS: If I had to try and boil it down to a couple, maybe the most prominent one is copy/pasting an existing approach without understanding what it's trying to achieve. And so, this is somewhat similar to saying, we're going to adopt agile by buying a bunch of post-it notes and standing around a white board every day. The ritual itself doesn't make for an effective process.
And I think software delivery, and certainly software delivery to Kubernetes, is no different. I think it's very important that you understand how you want to structure the information that you're going to deliver-- the artifacts and the configuration-- who needs to have access to what, what cadence they have, and how they're related to each other.
And then you can start asking yourself the questions of what's the appropriate tooling to use? Where do we store this information? What's the access control, et cetera, et cetera? I think just pulling down a tool and going, like, "set up all the things" is fine if you don't want to deal with a problem, and you want to come back to it later. But that's generally not an approach that scales very well in larger organizations.
LARS WANDER: It's also very common to try on your first shot to automate everything at once. And if you're first building out your pipelines and trying to get a sense for what your CI/CD pipeline looks like, it's OK for the parts you're not confident automating to have the script actually call out to a human.
So, if you're not sure that your validation or your canary or anything in there is actually going to catch a mistake from rolling out to production, it's totally fine to call back and say, OK, now someone needs to come in and click, OK, or needs to run this script to continue the process. Because it's much better to, at least at first, build an intuition for how these things run, than it is to try and automate and speed things up as much as possible.
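That pattern can be sketched in a few lines; the names are illustrative, and this stands in for whatever manual-judgment mechanism your pipeline tool provides. The automated check runs first, and when it can't give a confident verdict, the pipeline pauses for a person:

```python
def promote(candidate, canary_check, ask_human):
    """Promote a release candidate only with a confident verdict.

    canary_check(candidate) returns True (healthy), False (bad), or
    None when the automation isn't trusted to decide on its own.
    ask_human(candidate) blocks until someone clicks OK or rejects.
    """
    verdict = canary_check(candidate)
    if verdict is None:
        # Not confident in the automation yet: call out to a person.
        verdict = ask_human(candidate)
    return "promoted" if verdict else "rolled back"

# While building intuition, wire the canary to return None so every
# rollout requires an explicit human sign-off.
result = promote("build-42", canary_check=lambda c: None,
                 ask_human=lambda c: True)
```

As confidence in the validation grows, the canary check starts returning real verdicts and the human step fires less and less often.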
CRAIG BOX: What are some of the most common misconceptions that you see when people try to apply these processes to Kubernetes?
ANDREW PHILLIPS: So I think one more to point out here that we see relatively frequently, as people try to get their head around the notion of storing all this configuration in repositories, is this question of what it actually means. Like, there's a reasonably common theme of people saying, oh, I'm just going to-- my repository now becomes my source of truth. And basically, what is in the cluster doesn't matter because my repository actually reflects what is running. And then they even think about the idea of having a kind of reconciliation system that would automatically force that to be true.
And I think it's worth taking a step back when you start to think about this and ask yourself, is that realistic? And also, is that valuable? Because we're seeing more and more Kubernetes features whose whole point is to modify and maintain the cluster state over time. So imagine, for instance, a hypothetical rollout-- longer-term rollout system-- which would change the traffic to an application as you pass a series of checkpoints.
Now, as you do that, of course, the state of your Kubernetes cluster changes. And so, that means that it will naturally drift away, if you like, from the last thing you stored in your commit repository. And so, unless you are willing to get into this very complicated cycle of having your Kubernetes cluster automatically commit stuff back into a repository, which for the people who have tried it, it gets ugly very quickly, I think it's much more realistic to think of things like your repository as point-in-time snapshots that were valid at a specific point in time, rather than being a continual record of the state of your cluster at every point in time. So I think that's one of the very common ones that we see.
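Andrew's drift point can be illustrated with a toy comparison; the field names and values are invented for the example:

```python
def drift(committed, live):
    """Fields where live cluster state no longer matches the last commit."""
    return {key: {"committed": committed.get(key), "live": live.get(key)}
            for key in set(committed) | set(live)
            if committed.get(key) != live.get(key)}

# The manifest was committed with 3 replicas and 20% canary traffic...
committed = {"replicas": 3, "canary_traffic_percent": 20}

# ...but an autoscaler and a progressing rollout have since moved both,
# legitimately, without anyone touching the repository.
live = {"replicas": 5, "canary_traffic_percent": 60}

changed = drift(committed, live)
```

The divergence here isn't an error to reconcile away; it's the normal result of controllers doing their job, which is why the repository is better treated as a point-in-time snapshot.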
CRAIG BOX: Now, you're both involved with the Spinnaker open-source project which started its life as a VM-based deployment system built by Google and Netflix on top of an idea that Netflix had used in the past. Can you tell us a little bit about the process of building Kubernetes support as a first-class citizen into Spinnaker and where you think that tool is going?
LARS WANDER: So I was involved in both iterations of the Kubernetes support for Spinnaker. And the first one followed the existing Spinnaker model much more closely, very tightly coupling the types of resources that you can deploy to certain abstractions that Spinnaker requires. So it says, you want to deploy a Docker image? You're going to have to use a replica set and one of Spinnaker's deployment strategies.
And some of the big wins there were that you really get a great abstraction from the Kubernetes manifest to a degree that makes it much more palatable to the kinds of developers that might not be coming in with a full understanding of how Kubernetes works, or what all the manifest styles entail, or really a desire to edit YAML by hand.
The issue, though, was that as Kubernetes grew in complexity and the resource types and added support for CRDs and all these things, we came to a point where we realized that in order to support the full breadth of the Kubernetes ecosystem, we had to take a little more of a direct approach in accepting Kubernetes manifest files without pushing them through the Spinnaker abstractions first.
So right now that's in a beta stage, where we have a couple of larger users validating and contributing back and helping us build that support for this new iteration of the Kubernetes support. But what we're trying to build looking forward is, again, a set of valuable abstractions that really makes Kubernetes more usable to the developers that we talked about earlier that, again, don't really need to or want to understand all the details of your Kubernetes manifest files.
ANDREW PHILLIPS: Yeah. And I think just to echo that from a slightly less technical perspective, I think Spinnaker has always-- I like to think of it-- I mean, it's talked about as a CD platform. But I think of it similarly as a kind of a deployment platform and an application management platform of basically, helping you understand what's running where. And these are both things that Kubernetes is definitely getting better at. We see work around the application CRD and so on.
But again, there's inevitably, like in any new space, there is a focus on solving the use case for somewhere between 60% and 80% of the problem range. And Lars was talking earlier about the deployment object, which is perfectly fine for the vast majority of applications.
But there's always-- and especially as your setup gets a little bit more complicated, whether you have a multi-stage rollout process or multi-cluster, or you're trying to make this work across a large number of development teams, or whatever that magic complexity angle is that everybody ends up running up against, that's when it starts to make sense to look at something like Spinnaker. Spinnaker's really well-suited as something on top of which you can build a delivery platform for your organization.
And so, the typical conversation we've seen around this space very much is, here's pushing the envelope tech company that started out with Kubernetes somewhere between a couple of months or a couple of years ago, and they're doing very well. And they've hit some kind of scaling wall with their software delivery process.
The classic quote is, everything was fine. That's what we started with. And now we're getting to the point where it's just not going to scale for the more complicated use cases that we have. And I think Spinnaker has a huge amount to offer in that area-- things that are independent of Kubernetes, necessarily, but packaged together with good support for the Kubernetes fundamentals makes a very compelling kind of package for that kind of top end of the complexity spectrum.
ADAM GLICK: You've talked about how you've been working on Spinnaker and the problems that that's helping to solve, as well as the discussion earlier about that the perfect CI/CD tools don't yet exist. What do you think are the next innovations that are going to come in CI/CD in order to help people build their software better, faster, higher quality?
CRAIG BOX: And cheaper.
LARS WANDER: That would be great. I think one big thing that we're going to see is a better coupling between various tools in the space. So there's plenty of low-hanging fruit between, for example, GitHub and something like Spinnaker for surfacing information back and forth about, one, the status of your rollout; two, which set of commits, not just a single commit, was actually rolled forward into production.
And then, more context-specific details about, say: hey, I have these images deployed, and they are subject to these vulnerabilities, and you could take these remediations-- for example, upgrading the base Docker image, or doing these things in your CI/CD process to fix them. There's plenty of tools being built, but right now I'd say they don't interoperate quite as well as they could. And hopefully, in the coming months, quarters, and years, these things start to fit together a little better.
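The "set of commits, not just a single commit" point can be sketched with plain git: the interesting unit for a rollout is the revision range between what production was running before and what the new rollout shipped. This is a minimal sketch under invented assumptions -- the throwaway repo, the tags `v1` and `v2` standing in for deployed image versions, and the commit messages are all made up for illustration.

```shell
# Sketch: derive the commit set a rollout actually shipped.
# Assumption: deployed image versions map to git tags (v1, v2 here).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "first"
git tag v1    # what production was running before the rollout
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "second"
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "third"
git tag v2    # what the new rollout shipped
# The rollout carried a *set* of commits (second and third), not one:
git log --oneline v1..v2
```

A delivery tool that surfaces this range alongside the rollout status is exactly the kind of interoperation being described.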
ANDREW PHILLIPS: So I think Lars was talking about surfacing, in an end-to-end fashion, information that exists throughout the pipeline. I fully agree. That's been a dream for a long time, but it has never really been done particularly well, because every tool in the process has its own scope of information, and they don't really link up in a particularly nice manner.
So the classic use case is: oh, no, this application just broke in production. Let me try to find out, OK, which deployments recently affected it. Oh, OK. How can I easily and safely roll back? Oh, OK. So what were the code changes in that particular deployment, et cetera, et cetera.
And if you run through that use case in your mind-- certainly for myself, when I was doing this a few months ago-- you immediately see seven browser tabs open, and yourself clicking back and forth and searching for a commit ID in this other window, and all that kind of stuff. So I think a lot can be improved in that area. And even if you're storing everything in a repository, you just end up with seven repository windows open instead, which doesn't really make the whole thing any better.
I think another area where we can see a lot of improvement, certainly in a more technology-specific environment like Kubernetes, is this progression from "let me start simple" through to "let me handle the more complex cases." Because I think what we see as a sort of anti-pattern in this tooling market, if you like, is that there's a bunch of tools that cater to the bottom end of the spectrum, the kind of scripting part of it, and then a few tools that cater to the top end of the spectrum.
But from an organization's or a team's perspective, having a useful gradient here matters. You can imagine, OK, here's a new team starting out. They can go to some self-service UI. They say, what tier of complexity is my application, and they pick from some dropdown. And then they get a nice pre-baked pipeline with some configuration boundaries that they have to fill in.
But then as their application gets more complicated, they can maybe easily kind of break the glass on that thing and make a few modifications without suddenly having to become low-level experts and understand all the details. That's, I think, the kind of experience that you would hopefully get to. And we see a bunch of large organizations do this.
But the way they do this is by investing a huge amount of time and paying a tooling team to build this kind of stuff bespoke and ad hoc. And I think the huge promise of something like Kubernetes is that, through this ability to define abstractions in useful ways, maybe we can get to this kind of experience in a way that doesn't require you to hand-crank all of this tooling yourself.
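The "pre-baked pipeline with configuration boundaries" idea above can be sketched in a few lines. This is a hypothetical illustration, not Spinnaker's actual pipeline-template schema: the tier names, stage names, and `make_pipeline` function are all invented. The point is the gradient -- teams start from a golden-path template and can later "break the glass" on an individual stage without rebuilding everything.

```python
# Hypothetical "golden path" pipeline factory (not a real Spinnaker API).
# A team picks a complexity tier, fills in the required variables, and
# can override individual stages later without becoming a pipeline expert.

PREBAKED = {
    "simple":   ["build", "deploy"],
    "standard": ["build", "test", "deploy"],
    "critical": ["build", "test", "canary", "deploy", "verify"],
}

def make_pipeline(tier, app, cluster, overrides=None):
    """Return a pipeline definition for `app`; `overrides` maps a stage
    name to extra settings, replacing just that stage's defaults
    ("breaking the glass" on one stage, not the whole template)."""
    stages = [{"name": name, "app": app, "cluster": cluster}
              for name in PREBAKED[tier]]
    for stage in stages:
        if overrides and stage["name"] in overrides:
            stage.update(overrides[stage["name"]])
    return {"application": app, "stages": stages}

# A new team takes the defaults; a growing one tweaks only its deploy stage.
pipeline = make_pipeline("standard", "shop", "prod",
                         overrides={"deploy": {"strategy": "red/black"}})
```

The design choice being illustrated: the team never edits the template itself, only a bounded set of variables and per-stage overrides, which is what keeps the gradient from "simple" to "complex" walkable.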
CRAIG BOX: Lars, Andrew, thank you very much both for joining us today.
ANDREW PHILLIPS: Thanks very much.
LARS WANDER: Thanks for having us.
ADAM GLICK: Great to have you here.
CRAIG BOX: Lars and Andrew spend a lot of time with the Spinnaker open-source project. And you can find them on Slack at join.spinnaker.io, or find links to their GitHub pages in the show notes.
[MUSIC PLAYING]
ADAM GLICK: Thanks for listening. As always, if you enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on Twitter @KubernetesPod, or reach us by email at kubernetespodcast@google.com.
CRAIG BOX: If you haven't sent us an email before, now would be a great time just to reach out and say hello. And if you leave us your postal address, we might even send you a sticker. You can also check out our website at kubernetespodcast.com. Until next week, take care.
ADAM GLICK: Catch you next week.
[MUSIC PLAYING]