#34 December 17, 2018
Adam and Craig end the year by talking to Jordan Liggitt, the member of the Kubernetes Product Security Team who fixed the recent critical security vulnerability in the Kubernetes API server. We also take a look at the news from KubeCon.
This is our last episode for 2018. Thank you for your support this year, and we’ll be back on the 8th of January!
Do you have something cool to share? Some questions? Let us know:
CRAIG BOX: Hi, and welcome to the Kubernetes podcast from Google. I'm Craig Box.
ADAM GLICK: And I'm Adam Glick.
CRAIG BOX: We've reached the end of the calendar year. What are your plans for the holiday season, Adam?
ADAM GLICK: Oh, I'm looking forward to taking some time off. We'll have the next couple weeks off. For those of you who've been loyal listeners, thank you for listening.
CRAIG BOX: Indeed.
ADAM GLICK: And after this week's show, you'll get a couple weeks reprieve. I'll be taking some time to go off into the snowy north and enjoy some time with family. How about yourself?
CRAIG BOX: I'll be taking the time to go through the cupboards and start spring cleaning early. Plus watch a lot of TV, I think. It's a nice calm, small Christmas here.
ADAM GLICK: Definitely some time to catch up on some stuff. I have a whole bunch of reading to do around the container ecosystem and some of the new technologies out there, as well as--
CRAIG BOX: Boring!
ADAM GLICK: --spend some time doing some gaming. I have committed myself to trying to get significantly further in Evoland 2. I'm now in the side-scrolling endless runner phase, which I hope is not actually endless.
CRAIG BOX: Have you made your way up to 1990 yet?
ADAM GLICK: Yes. Yes, I am--
CRAIG BOX: How far does Evoland 2 go?
ADAM GLICK: I am in the '90s, definitely, but not too far beyond that. And also enjoying some deck-building games, for those that are card players.
CRAIG BOX: For those of us who are not, perhaps, inform us what we are talking about.
ADAM GLICK: I've been enjoying a game called Star Realms, which was created by one of the people who helped create Magic: The Gathering. It's a kind of interesting galactic battle game based on deck-building principles: getting cards, building up your deck, and crushing your opponents.
CRAIG BOX: Where I come from, a deck is something you build to put a barbecue on and enjoy the summertime at Christmas. But where I live here, we may even get some snow, and I do think that Christmastime with snow is much better. The Southern Hemisphere should probably move Christmas to the middle of the year.
ADAM GLICK: There you go. There's the campaign for next year. Craig and Adam--
CRAIG BOX: Season zones.
ADAM GLICK: --campaign to move Christmas. We'll see how that one goes.
CRAIG BOX: While we wait, let's look at the news.
Etcd, the distributed data store that underpins Kubernetes, has been donated to the CNCF. Etcd is a distributed key-value store that provides a reliable way to manage the state of a distributed system, like a Kubernetes cluster. Inspired by papers on Google's lock service, named Chubby, and the Raft consensus algorithm, etcd was announced by CoreOS in June 2013, and it has been the storage engine for Kubernetes since that project's creation. The relationship between the two is discussed in a new blog post by Gyuho Lee and Joe Betz. The donation is credited to Red Hat, who acquired CoreOS earlier this year; etcd enters the CNCF at the incubation stage.
ADAM GLICK: Looking at KubeCon, GeekWire asked, has Istio become the new cloud native darling? It sure looks like it to us. Google released Istio on GKE, making installing Istio into a cluster a one-click affair for new and existing clusters.
CRAIG BOX: VMware announced a beta of NSX Service Mesh, an Istio distribution. The product is slated to launch for their cloud PKS service in early 2019, and they telegraphed support for on-prem PKS and other platforms later next year, as well as federated meshes.
ADAM GLICK: Aspen Mesh, a subsidiary of F5 Networks, released an open beta of their self-titled product, positioning themselves as the enterprise-ready distribution of Istio. Aspen Mesh is a SaaS service which is connected to by agents that run in your cluster and promises extra RBAC controls and multi-cluster management. The beta is free and available now.
CRAIG BOX: Commercial network vendor A10 Networks also announced their A10 Secure Service Mesh, which bucks the Istio trend and uses Nginx as its proxy server. It's available for a 30-day free trial.
ADAM GLICK: Serverless, and especially Knative, was another hot topic at KubeCon. Google, SAP, and IBM all reiterated their commitment to the project.
CRAIG BOX: Red Hat announced that Knative will be supported in OpenShift in preview early next year, and provided manual installation instructions for Minishift, their local VM version of OpenShift, in the meantime.
ADAM GLICK: Pivotal launched Pivotal Function Service, or PFS. PFS is advertised as a multi-cloud enterprise version of Knative. You can apply for access for an early preview.
CRAIG BOX: GitLab announced GitLab Serverless, integrating the deployment of serverless functions and applications on any cloud or infrastructure straight from the GitLab UI. The integration was built in partnership with TriggerMesh, who we spoke to in episode 28, and is due to launch in late December.
ADAM GLICK: In other serverless news, Oracle launched Oracle Functions, based on the Fn project, and Microsoft's Deis Labs announced Osiris, a general-purpose, scale-to-zero framework for Kubernetes. Sticking with cloud vendors, Oracle also announced the Oracle Cloud Native Framework, including a variety of hosted services such as Resource Manager, a hosted Terraform provider, and Streaming, a scalable multi-tenant pub/sub platform.
CRAIG BOX: Microsoft announced that the characters from Deis' "Children's Illustrated Guide to Kubernetes" have been donated to the CNCF. They gave one of the more atypical keynotes in KubeCon history, spending 20 minutes reading a bedtime story to Kubernetes professionals. The new story, "Phippy Goes to the Zoo," follows the earlier one, in which a pirate owl captures a giraffe, keeps her in a crate in the hold of a ship, and performs cloning experiments on her.
ADAM GLICK: My hat's off to the person who decided that a giraffe with a hippo's head is the right analog for a CRD. Microsoft also announced that Azure Monitor is now GA. As the name suggests, the service monitors the health and performance of Kubernetes clusters hosted on Azure Kubernetes Service.
CRAIG BOX: DigitalOcean has made their managed Kubernetes service available to all. The service was originally announced in May 2018 in early access, and has now been opened up to everybody. DigitalOcean has also integrated the service with their load balancers, for an additional fee. Speaking of infrastructure providers, Linode also announced support for Kubernetes through their command-line tool, which will provision clusters using Terraform, with support for their load balancers and block storage volumes.
ADAM GLICK: VMware has closed its acquisition of Heptio, and is taking the opportunity to tout it as part of their continued commitment to the Kubernetes community. VMware's 10-Q disclosure puts the acquisition price at $550 million. Additionally, VMware's owner, Dell, has gained shareholder approval to go public again, which it will do by buying out the tracking stock tied to its publicly traded subsidiary VMware, acquired through the EMC deal announced in 2015.
CRAIG BOX: Quick-fire Kubernetes security news! NeuVector announced support for the containerd and CRI-O runtimes in their container firewall.
ADAM GLICK: Aqua's container security platform is now certified to cover the Kubernetes CIS benchmarks.
CRAIG BOX: Lacework announced their configuration scanning platform now supports Kubernetes.
ADAM GLICK: Sysdig released Sysdig Secure 2.2, which adds Kubernetes audit events and the ability to block deployments using admission controllers.
CRAIG BOX: Twistlock released version 18.11, which introduces visualization of service accounts for Kubernetes, as well as Istio support, including enhancing the service view with Istio data and adding compliance and security configuration checks.
ADAM GLICK: Moving on, Grafana Loki has been released on GitHub. Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost-effective and easy to operate, as it does not index the contents of the logs; instead, it indexes only the labels of your log streams. Loki uses Grafana for its UI and Promtail as its agent to gather logs.
CRAIG BOX: Originally named Tempo, Loki was renamed just before its launch. Loki, the trickster god from Norse mythology, is today better known for his appearance in the Marvel movies. Loki joins other Marvel-themed projects such as Thanos, the highly available Prometheus back end that threatens to snap its fingers and delete half your metrics.
ADAM GLICK: Wanting to remind the world that there is more to their company than Ice Cube, Mesosphere announced Maestro, an SDK for Kubernetes operators. Maestro takes the learnings from the various frameworks built for Mesosphere's DC/OS and applies the same logic to CRDs in Kubernetes. You define the states of your application, along with hooks that will cause code to run on lifecycle state changes.
CRAIG BOX: Stephen Day of Cruise has released RBACSync on GitHub. RBACSync is a tool built for Cruise, the self-driving car company owned by General Motors, which lets you sync Kubernetes role bindings from G Suite groups.
ADAM GLICK: And finally, PlanetScale has come out of stealth with the beta announcement of their database-as-a-service based on Vitess. The founders created Vitess while working at Google and YouTube. The open source project is currently in the incubation phase at the CNCF, and scales horizontally like NoSQL, but with the reliability of MySQL. PlanetScale raised $3 million in an April seed round to run Vitess in their own offering, as well as on public clouds. The founders have promised to uphold the principles of open source and keep the query API open, so customers don't have to worry about lock-in.
CRAIG BOX: And that's the news.
ADAM GLICK: Jordan Liggitt is a staff software engineer at Google and a member of the product security team for the Kubernetes open source project. Welcome to the show, Jordan.
JORDAN LIGGITT: Glad to be here.
ADAM GLICK: Let's start by talking about KubeCon. We've summarized the news, but we weren't there on the ground for this KubeCon, unfortunately. Can you summarize the mood?
JORDAN LIGGITT: Yeah. It was bigger, better. It was lots of excitement. The number of people there and the number of categories of people there were just kind of amazing. You saw customers and vendors and contributors and people there for the technical side and the business side. It was pretty fantastic.
CRAIG BOX: What technologies, in particular, would you say were buzzing?
JORDAN LIGGITT: There were two that I heard a lot about. First, data plane service technologies like Istio and Envoy were getting a lot of attention. More and more applications are being brought into these cloud environments, and some of the old assumptions about how networks, security, monitoring, and all of these essential aspects work are being challenged. So people are really excited about tools that help address those issues without having to make changes to all these applications.
And the other category that I heard a lot about is kind of less exciting but just as important, and that's kind of Day 2 maintenance of clusters. So whether it was the security issue that just came out or just people having experienced trying to keep their clusters updated and running for a year or so now, Day 2 maintenance is something that is being promoted as a feature and value add for hosted providers and on-premise solutions.
ADAM GLICK: For you, personally, what did you find most interesting at the show?
JORDAN LIGGITT: I loved KubeCon as a chance to connect with the members of the community that I've been working with over the previous year. So I call it the hallway track. That's the best part of the conference for me, getting to sit down and talk face-to-face with groups of two or three or four people and actually work through things we've been trying to get done over the past year.
CRAIG BOX: Were there any talks that you saw that you'd recommend someone must see on YouTube?
JORDAN LIGGITT: The keynotes. Kelsey Hightower's keynote and Julia Evans' keynotes were especially good. A lot of the individual tracks I didn't make it to, I'm counting on seeing on YouTube. So I'm glad they posted them so quickly. But those two keynotes, in particular, were really, really good.
ADAM GLICK: How did you get involved in Kubernetes to begin with?
JORDAN LIGGITT: Well, I was at Red Hat working on OpenShift prior to Kubernetes. And so when Google open sourced Kubernetes, Clayton Coleman tagged me and a couple of the other OpenShift engineers to start working on this. So this was back in 2014. So it was very early in the project, very small. It was kind of this just getting started thing. And I was working on a lot of the security and API portions of the product. And that's how I got involved.
CRAIG BOX: How did you get involved with security of Kubernetes?
JORDAN LIGGITT: A lot of the work early on in the project was identifying areas that were important but weren't getting the attention they needed. Because OpenShift was an existing product, it didn't have as much leeway to say, well, this is version 1.0, we don't have to get the whole story in place for the first release on top of Kubernetes. It was very much trying to get feature parity for the things built on top of Kubernetes that already existed. And so several of us at Red Hat were very involved in getting the authentication and authorization stories in place.
And so as Kubernetes matured and more people started to care about that-- you know, versions 1.3, 1.4, 1.5-- the group of people who had already been paying attention to it over the past year or two had kind of coalesced into SIG Auth. And then, as we were putting together a team to respond to security issues, the same people were tagged. That's how I got included in that group.
ADAM GLICK: What is the Kubernetes product security team?
JORDAN LIGGITT: The product security team was born out of necessity, after issues were reported in probably 2015 or 2016. We realized we didn't really have a great process for handling those, or an identified group of people who were going to handle them. And so Brandon Philips really took the lead in putting that together: he set up the initial group of people and started to write out how we were going to handle issues when they're reported-- make sure they get fixed, make sure they get communicated well.
CRAIG BOX: In the last few weeks, we've been dealing, as a community, with the very easy-to-parse name of CVE-2018-1002105. I'm sure we will have all heard of it, maybe not by its number but by its reputation as a critical security vulnerability in Kubernetes. What exactly was that vulnerability?
JORDAN LIGGITT: It was a TCP connection reuse vulnerability, which is a fancy way of saying that one connection that was correctly authorized could then be reused in ways that were not allowed. This was a vulnerability in the way the API server proxies to other servers-- a couple of examples of that are the API server proxying to aggregated servers, or to the kubelet. If a request that the API server proxied through was allowed, an attacker could cause that connection to stay open, and then reuse it to bypass the API server's authorization controls.
CRAIG BOX: There's two major areas where we've seen people discussing this vulnerability. One is with the API server and proxying to aggregated servers, like the metrics server, for example. And the other one is the kubelet on the node itself. Can you talk about the differences between how the exploit applies to both of these environments?
JORDAN LIGGITT: Sure. So when you're proxying to an aggregated server, typically that API server, that aggregated server, is providing some API service of its own. And in some cases, like the metrics server, it's a read-only API, so you're not really worried about people persisting data or making changes. But it might be an information exposure problem. And the way that that exploit was allowed was via anonymous discovery requests.
So we have discovery APIs that let you understand what APIs are served, and you could proxy those discovery API calls through and trigger this vulnerability. For the kubelet: the way that you speak to the kubelet is also through the API server. You do that when you're making exec, attach, and port-forward requests. You may have seen this with kubectl exec or kubectl attach. Those requests also go through the API server and are proxied to the kubelet.
And so that usually requires a higher level of permission. People don't normally let anonymous users do those things. But if you're running a cluster that has multiple users and you don't expect all of them to have full control over the kubelets, then being able to escalate from exec-ing into one pod to running things in any pod on the node is a permission escalation.
CRAIG BOX: What was the timeline of reporting of this vulnerability?
JORDAN LIGGITT: So the issue was originally filed as a bug against Rancher, back in August; people were seeing kind of strange behavior in some load balancer cases. It took them several months to reproduce it and figure out what was even going on, because it was very edge-case driven-- it was an error case within an error case, seen accidentally.
And so in November, on November 1, they finally reproduced it and understood the core issue. And as soon as they did, kudos to them, they kind of understood that this could have security implications. And so they reported that to the Kubernetes product security team privately, saying, we saw this behavior. Here's the bug. Here's a way to fix it. And we wanted to let you know first. And so that was November 1.
Within a couple days, we had acknowledged the report. We had actually come up with a proof of concept exploit that demonstrated how critical this issue was. And at that point, the security release process kicked in. And we started developing the official fix, contacting distributors, and going from there.
ADAM GLICK: How did it fall to you to fix this bug?
JORDAN LIGGITT: So the product security team round-robins how we handle reports. When an issue is reported, one of the product security team members will take it, respond, and then reach out to the relevant developers to develop the fix and understand the extent and how we're going to go about fixing the problem. Because this was an area that I actually work in, also, I not only was the product security team person handling the fix, but I also was helping put together the fix and coordinate with the other people that work on the API server.
CRAIG BOX: What did you actually do? Which code did you change or what new function did you implement to fix the error?
JORDAN LIGGITT: The vulnerable component was the proxy component in the API server. And so it was a case where an error response from the back end wasn't being checked for. Assumptions were being made that once this upgrade request came through, then we were just going to proxy the rest of the data through. And so the change was actually quite small. If you look at the patch, it was just one function, a few lines. It was checking an error condition and then testing and returning an error response.
It's always fortunate when a change like that is small, especially when it's something we're going to have to back port. The worst case is when there's a tricky problem with a big change that has to be back ported in areas that have changed across releases. But in this case, it was a very contained change, which was great.
CRAIG BOX: Once you had tested the change, what was your disclosure process to vendors and then to the community?
JORDAN LIGGITT: There's a distributor announce list that distributors can join; the process for joining it is listed on the security response page. Distributors on that list received advance notice of the fix-- basically, the information that is in the now-public CVE issue, along with the patches that would be released to the open source repo on the 26th. They had two weeks, I believe, to apply those patches and distribute fixes, both to their hosted platforms and to their customers.
And the timing of that was tricky, especially because it fell around Thanksgiving. So we were trying to find a way to get the fix out responsibly and not force people to roll out changes over holiday freezes or over Black Friday, but also get the information out to distributors and to the community as quickly as we could.
ADAM GLICK: This particular security issue is something that received a fair amount of press. Has there been any evidence that this vulnerability has been exploited?
JORDAN LIGGITT: Prior to the public disclosure and fix, we didn't see any evidence of this being exploited. One of the reasons this was seen as critical, though, was because it would have been difficult to detect. A lot of the normal auditing and logging mechanisms we have only take note of requests that the API server handles. And so because this was a TCP reuse bug, the subsequent exploited requests wouldn't have shown up in those logs.
Since disclosure, there have been reports of cryptomining workloads running on people's clusters. It hasn't been determined yet if those are a result of this or just other unsecured clusters. Unsecured clusters are a problem in general. People don't take proper steps to lock their clusters down. And so we are seeing reports, but it's unclear whether it's because of this or because of misconfiguration.
CRAIG BOX: If all I had was a Kubernetes API server available on the internet, where anonymous access would let me, perhaps, query the discovery endpoints, but no other access was configured for users-- the cluster was thought secure at the time-- would an attacker be able to use this exploit to run arbitrary code on the cluster?
JORDAN LIGGITT: The anonymous requests would be able to make calls to aggregated API servers. So it depends on whether you had an aggregated API server that would let you launch workloads. Most default configurations, the only aggregated API server is the metrics server, which is a read-only API. So in typical configurations, anonymous users might be able to discover some things about your cluster through the metrics API but would not have been able to run arbitrary workloads.
CRAIG BOX: Do we think open source projects are likely to have the same class of bug? And is there anything that we're doing to reach out to them and identify here's a test that you didn't think about and the problem that you might have?
JORDAN LIGGITT: That's a great question. Not only do we sweep our own code bases, but we actually did look at other code bases, as well, and did report similar issues to some other projects. I'm going to not mention which projects those are, as we're still working through some of those processes. But yes, we do try to be good citizens in the open source community and disseminate the knowledge that we get from our own vulnerabilities out to other projects.
CRAIG BOX: Only the last four versions of Kubernetes were patched, though this affected versions going all the way back perhaps to 1.0. Some people have talked about back-porting to previous versions. Do you think that issuing a patch for all previous releases out of band would have been a good idea? Does the project even have the ability to do this?
JORDAN LIGGITT: Aggregated API servers came in around 1.6, and 1.7 is when they started being enabled by default. So the anonymous exploit path dated back to probably 1.6 or 1.7, if I recall correctly. As far as patching versions prior to the three that are maintained, there has been discussion around that. The main difficulty is staffing and CI: maintaining those two things for lots of releases is probably more than the project can sustain at the moment. We don't want to patch and release things to older releases that haven't gone through the same set of CI and testing, where we can actually confidently say, this is a quality release that you should upgrade to in production.
That said, a lot of commercial vendors did take this patch back to versions prior to 1.10. And so if you were running GKE or OpenShift or some other commercial distributions, I do know that this patch was back-ported to all supported versions for those vendors.
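The open source releases that contained the fix were v1.10.11, v1.11.5, and v1.12.3, with 1.13.0 shipping after the fix landed. As a rough illustration of the "am I affected?" check discussed here, a version comparison might look like the following Go sketch. It is not an audit tool-- as Jordan says, distributor builds may carry back-ported fixes that a plain upstream version check won't see.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// fixedPatch maps a 1.x minor version to the first patch release that
// contained the fix for CVE-2018-1002105, per the public announcement.
var fixedPatch = map[int]int{10: 11, 11: 5, 12: 3}

// isPatched reports whether an upstream version string like "v1.10.11"
// contains the fix. Parse errors are ignored for brevity; this is a sketch.
func isPatched(version string) bool {
	parts := strings.Split(strings.TrimPrefix(version, "v"), ".")
	if len(parts) < 3 {
		return false
	}
	major, _ := strconv.Atoi(parts[0])
	minor, _ := strconv.Atoi(parts[1])
	patch, _ := strconv.Atoi(parts[2])
	if major != 1 {
		return major > 1
	}
	if minor >= 13 { // 1.13.0 shipped after the fix landed
		return true
	}
	need, known := fixedPatch[minor]
	if !known { // 1.9 and earlier never received an upstream patch
		return false
	}
	return patch >= need
}

func main() {
	for _, v := range []string{"v1.10.10", "v1.10.11", "v1.12.3", "v1.9.11"} {
		fmt.Printf("%s patched: %v\n", v, isPatched(v))
	}
}
```

Running this prints that v1.10.10 and v1.9.11 are unpatched, while v1.10.11 and v1.12.3 contain the fix.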
ADAM GLICK: Have there been similar vulnerabilities in Kubernetes previously?
JORDAN LIGGITT: There haven't been ones similar to this. So one of the things we do whenever we have a vulnerability, we do a postmortem that not only looks at how the specific issue came about but also if there are similar areas that need to be audited or corrected or better tested. We're in the process of doing a postmortem for this issue right now. And the goal of those is to try to prevent similar vulnerabilities. So, no, there haven't been ones related to this.
CRAIG BOX: What about vulnerabilities of a different class with a similar criticality?
JORDAN LIGGITT: Yes, there have been. There were issues around how client certificates were validated, I think back in 2016, that had a similar impact: someone with public knowledge of the issue could escalate the identity with which they made API calls. So, yes, this is not the first critical vulnerability; it's the first one that got a lot of press. But again, as usage of Kubernetes increases, the consequences of vulnerabilities increase as well.
And so that's why we take the security release process seriously. We take the postmortem process seriously. And we want to be responsible and diligent about how we maintain this component for people.
CRAIG BOX: I know you'll still be in the process of writing the postmortem, but given that it's been a couple of weeks now since this vulnerability was made public and then a few weeks before that you've been working on it, is there anything that you can immediately say that you would have done differently?
JORDAN LIGGITT: One of the things that's come out of the postmortem so far is around the communication, both with distributors and with end users. Making it clearer for people who are wanting to understand if they're affected. Giving them really concrete tests that they can run against their cluster so that they can understand whether they're affected or not and whether they need to react immediately or can take their time to do an upgrade.
ADAM GLICK: Is this your day job?
JORDAN LIGGITT: My day job covers a lot of areas. One of them is security. One of them is management of the API server and API service. And so this one actually intersected both of those. So, yeah, this is a big part of my job-- pay attention to things like this and make sure that they're handled well.
CRAIG BOX: I assume this has been relatively high stress for you personally, as well as for the whole product security team. What have you done to unwind, now that this is all winding down for you?
JORDAN LIGGITT: Oh, man. I think, in general, the project, as it has gotten more use, there is a sense of responsibility about the things we do that kind of make everything we do high stress, in some ways. Just knowing that the decisions we're making have implications and trying to think through that. So this particular month, as we've been dealing with this, was a little more intense than usual. But it actually wasn't particularly more stressful.
We want to do the right thing by users, communicate well to them, help them understand what sort of situation they have to deal with, and get things fixed for them.
ADAM GLICK: What kind of exploits keep you up at night?
JORDAN LIGGITT: I think exploits that take advantage of assumptions that people have made, where someone wasn't even considering an entire class of possible exploits. Things where people thought a particular workload was protected by a network, and then, when brought into a new environment, it's actually in a different network, and you can no longer make that assumption. Or assumptions that a particular data source is trustworthy and that data could be injected into that and then propagated to another system. Things like that keep me up.
CRAIG BOX: When you're not working on API servers or the security team, I see your name pop up a lot on Stack Overflow answering questions relating to Kubernetes. What motivates you to participate in that community?
JORDAN LIGGITT: Whether it's on Stack Overflow or on Slack, I think helping people is just really attractive to me. I remember being new to a lot of these technologies and struggling to understand them and kind of understand how they fit together. And so if there's a way that I can take two minutes of my time and save somebody hours of their time, that seems like a force multiplier that's worthwhile.
ADAM GLICK: Jordan, thank you so much for coming on the show today and sharing your knowledge with all of us.
JORDAN LIGGITT: Thanks for having me.
CRAIG BOX: You can find Jordan on Twitter @liggitt, L-I-G-G-I-T-T, or on Stack Overflow, the Kubernetes Slack, or GitHub under the same name.
ADAM GLICK: We want to say thank you to all of you who've been listening. It's been a great year, and we appreciate all the support that we've had. We're going to take two weeks off for the end of the year. And we hope that each of you get some time to take off to spend with your friends, family, and as a way to get ready for the new year and all the exciting things that will come then.
CRAIG BOX: We absolutely, 100% promise to come back. If you've enjoyed the show this year, please help us spread the word and tell a friend. If you're going to see friends over Christmas, we recommend you do it then. If you have any feedback for us, we recommend you send that to us in January. And you can do that @KubernetesPod or reach us by email at firstname.lastname@example.org.
ADAM GLICK: You can also check out our website at kubernetespodcast.com. Until next time, have a great holiday season and New Year's, and we'll be back in your podcast feed on January 8. Till then, take care.
CRAIG BOX: See you next year.