#80 November 19, 2019

Lyft and KubeCon NA 2019, with Vicki Cheung

Hosts: Craig Box, Adam Glick

Catch all the news (and there is a lot of it!) from KubeCon NA 2019 in this week’s show. We then talk to Vicki Cheung, the conference co-chair, and an Engineering Manager running Kubernetes infrastructure at Lyft.

Do you have something cool to share? Some questions? Let us know:

News of the week

CRAIG BOX: Hi, and welcome to the Kubernetes Podcast from Google. I'm Craig Box.

ADAM GLICK: And I'm Adam Glick.

[MUSIC PLAYING]

ADAM GLICK: Well, it's great to be here in San Diego and actually to be here in person.

CRAIG BOX: Yes, it's one of those rare occasions where we are both in the same room talking to one another.

ADAM GLICK: It is going to be quite a busy week. Pretty excited for the start of KubeCon, which has kicked off today. And we have a ton of news to cover.

CRAIG BOX: Let's get straight into it.

[MUSIC PLAYING]

ADAM GLICK: Docker shook the container industry this week by announcing that it was retooling as a developer company and selling its enterprise software business. First, Mirantis announced the acquisition of Docker's enterprise business, including 750 customers and 300 engineers for an undisclosed sum.

Docker then followed by announcing a $35 million cash injection, a renewed focus on Docker Desktop and Docker Hub, and a new CEO, the previous Chief Product Officer Scott Johnson. Mirantis also announced a beta release of their Kubernetes-as-a-Service offering, called Mirantis KaaS. That's with a K. The new offering covers bare-metal, edge, on-prem, public and private clouds.

CRAIG BOX: Google Cloud made a number of announcements relating to GKE this week. Preemptible VM nodes, vertical pod autoscaling, and node auto-provisioning are all now GA, a potent combination for running batch workloads. To that end, a preview of Batch on GKE brings the functionality and familiarity of a traditional batch job scheduler to Kubernetes, freeing applications from the limitations of fixed-size clusters by dynamically allocating resources.
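
As an illustration of the kind of object the vertical pod autoscaling feature manages, here is a minimal sketch that creates a VerticalPodAutoscaler with the Kubernetes Python client. The deployment name, namespace, and API version are assumptions; older VPA installations expose autoscaling.k8s.io/v1beta2 rather than v1.

```python
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside a cluster

# Hypothetical target: a Deployment called "batch-worker" in the default namespace.
vpa = {
    "apiVersion": "autoscaling.k8s.io/v1",  # may be v1beta2 on older VPA installs
    "kind": "VerticalPodAutoscaler",
    "metadata": {"name": "batch-worker-vpa", "namespace": "default"},
    "spec": {
        "targetRef": {"apiVersion": "apps/v1", "kind": "Deployment", "name": "batch-worker"},
        "updatePolicy": {"updateMode": "Auto"},  # let VPA apply its recommendations
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="autoscaling.k8s.io",
    version="v1",
    namespace="default",
    plural="verticalpodautoscalers",
    body=vpa,
)
```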

Also launched in beta were surge upgrades, a facility that adds extra nodes during a cluster upgrade so that more nodes can be upgraded in parallel. Google says surge upgrades will reduce disruption to customer workloads and can drastically reduce the time it takes to upgrade a large node pool. Announcements about Google Cloud's Anthos are being made at Next UK later this week.
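
Surge behaviour is configured per node pool. A minimal sketch, assuming the flag names from the beta launch (--max-surge-upgrade and --max-unavailable-upgrade) and placeholder cluster, pool, and zone names:

```python
import subprocess

# Allow two extra "surge" nodes during an upgrade, and keep all existing nodes
# serving until their replacements are ready.
subprocess.run(
    [
        "gcloud", "container", "node-pools", "update", "default-pool",
        "--cluster", "my-cluster",
        "--zone", "us-central1-a",
        "--max-surge-upgrade", "2",
        "--max-unavailable-upgrade", "0",
    ],
    check=True,
)
```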

ADAM GLICK: Cloud Run, a fully managed Knative service from Google Cloud, is now GA. It comes in two flavors. Google Cloud Run is a serverless execution environment that lets you run stateless containers without worrying about the infrastructure. Cloud Run for Anthos lets you deploy those same applications into a private Anthos GKE cluster, running either on-prem or in GCP. You can also take these applications and run them on any Knative service of your own choosing.
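
To show the two flavors side by side, here is a sketch of deploying the same container image to each, with placeholder service, image, cluster, and region names. The fully managed flavor uses --platform managed, while Cloud Run for Anthos targets a GKE cluster instead:

```python
import subprocess

IMAGE = "gcr.io/my-project/hello:latest"  # placeholder image

# Fully managed Cloud Run: no cluster to think about.
subprocess.run(
    ["gcloud", "run", "deploy", "hello-service",
     "--image", IMAGE,
     "--platform", "managed",
     "--region", "us-central1",
     "--allow-unauthenticated"],
    check=True,
)

# Cloud Run for Anthos: the same container, deployed to your own GKE cluster.
subprocess.run(
    ["gcloud", "run", "deploy", "hello-service",
     "--image", IMAGE,
     "--platform", "gke",
     "--cluster", "my-anthos-cluster",
     "--cluster-location", "us-central1-a"],
    check=True,
)
```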

CRAIG BOX: Microsoft made a number of announcements in the last week. The open-source AKS engine tool, for deploying unmanaged Kubernetes, is getting preview support for deploying pods to secure enclaves, such as those provided by Intel SGX. Microsoft said that this feature may eventually be added to managed AKS, if it proves popular.

The Azure Container Registry has added repository-scoped permissions. And the Kubernetes Event-Driven Autoscaling (KEDA) project has hit 1.0. Finally, Microsoft announced GitHub Actions support for publishing CNAB bundles, and a CNAB operator that runs on Kubernetes clusters to pull down and deploy those bundles from a repository. You can learn about CNAB in episode 61.

ADAM GLICK: The Helm team has announced the long-awaited stable release of Helm 3. Helm is a package manager for Kubernetes, and Version 3 removes the Tiller component, which previously ran inside the cluster with broad permissions. Other enhancements in the release include a new Go SDK and experimental support for storing charts in OCI registries.
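
The practical difference shows up in the client workflow: with Tiller gone, a Helm 3 release is created directly by the CLI, with the release name as a required argument and the release record stored in the target namespace. A minimal sketch with placeholder repository and chart names:

```python
import subprocess

# Add a chart repository (name and URL are placeholders).
subprocess.run(
    ["helm", "repo", "add", "examples", "https://example.com/charts"],
    check=True,
)

# Helm 3: the release name ("my-release") is a required positional argument,
# and no cluster-side Tiller is involved; state is stored in the "demo" namespace.
subprocess.run(
    ["helm", "install", "my-release", "examples/my-chart", "--namespace", "demo"],
    check=True,
)
```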

CRAIG BOX: The Istio team has announced the release of Istio 1.4, the latest quarterly release. Version 1.4 continues improving the Istio user experience, with a focus on simplification. There are also new features that improve the performance and experience of running Istio. GitHub this week announced that Istio was in the top five fastest-growing open-source projects hosted on that site and the highest-ranked cloud-native project in that category.

ADAM GLICK: If you're using Container Storage Interface drivers, be aware there has been a security release that impacts most versions of the sidecars bundled with those drivers. The vulnerabilities are medium severity and can result in unauthorized volume data access or mutation when using the volume snapshot, cloning, or resizing features in Kubernetes. Upgrading your CSI drivers to the fixed sidecars is recommended.
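
One way to check what you are running is to list the sidecar container images in your cluster and compare them against the fixed versions. Here is a minimal sketch using the Kubernetes Python client; the sidecar name list is an assumption covering the common CSI sidecars.

```python
from kubernetes import client, config

config.load_kube_config()

# Common CSI sidecar image names; adjust for the drivers you actually deploy.
SIDECARS = ("csi-provisioner", "csi-attacher", "csi-snapshotter",
            "csi-resizer", "csi-node-driver-registrar")

v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    for container in pod.spec.containers:
        if any(name in container.image for name in SIDECARS):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {container.image}")
```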

CRAIG BOX: Red Hat announced that the Quay container registry, started in 2013 and acquired as part of their CoreOS acquisition in January 2018, has been open sourced. The new project, Quay, with a Q and an odd pronunciation, will be the open-source upstream of both Red Hat's commercial Quay product and the public registry, quay.io. The project also includes the Clair security scanner.

Red Hat also announced version two of CodeReady Workspaces, their Kubernetes-based IDE for OpenShift. V2 adds air-gapped installation, shareable developer workspaces, support for VS Code extensions, and an updated desktop-like UI.

ADAM GLICK: VMware released an open-source project for troubleshooting Kubernetes clusters. Crash Diagnostics is a tool to help investigate, analyze, and troubleshoot unresponsive or crashed Kubernetes clusters. In its first release, it can automatically collect machine state information from each node and bundle it for a human to analyze.

VMware also announced the release of version 1.6 of their Enterprise PKS product. This release ships with version 1.15 of Kubernetes and adds more management features, including the Enterprise PKS management console and integration with the recently announced Tanzu Mission Control.

CRAIG BOX: The CNCF made a number of announcements this week, including that they now have over 500 members, with new platinum members Arm, NetApp, and Palo Alto Networks, and new gold members Equinix and Fidelity Investments. More than 100 vendors now provide certified, conformant Kubernetes products. And if you'd like to work for one of them, there's a new CNCF jobs board.

ADAM GLICK: Monitoring company Datadog introduced network performance monitoring this week. The product provides visibility into network flows in granular detail for every component in a cloud environment, and all the connections between them, including Kubernetes pods. Datadog also announced features for distributed tracing and metrics, and two new CamelCase open-source projects, the WatermarkPodAutoscaler and the ExtendedDaemonSet. Their popular Container Report is now updated for 2019 and gives insights into the usage patterns derived from their customers.

CRAIG BOX: Chaos engineering company Gremlin has added Kubernetes support to their Reliability-as-a-Service platform. You can configure an attack to target the service of your choice, and the gremlin will infer which containers it needs to eat to perform that test. Just don't feed them after midnight.

ADAM GLICK: O'Reilly Media, publishers of the books with the cute animals on the front, have announced that they have acquired Katacoda, an online training and playground tool. Katacoda is popular with cloud-native vendors and powers the interactive training on kubernetes.io.

CRAIG BOX: MayaData, the sponsor of the OpenEBS project, announced that a new storage engine has been donated to that project. MayaStor is built and optimized to take advantage of NVMe storage devices and cloud volumes, and promises up to a five-times performance improvement, with tested results of more than 10 million IOPS. Learn more about OpenEBS in episode 56.

ADAM GLICK: PlanetScale has announced CNDB, a fully managed cloud-native database based on Vitess and MySQL. PlanetScale was founded by members of the YouTube team, who first started on what would become Vitess back in 2010.

CRAIG BOX: Rancher Labs has announced that their k3s platform, pronounced "keys", has gone GA. And their Rio MicroPaaS for Kubernetes has now entered beta. If you want to hear more from Rancher on these products, you can listen to episode 57.

ADAM GLICK: Sysdig announced the Cloud Native Security Hub, a web platform for discovering and sharing rules and configurations for various cloud native security tools. For any given CVE or exploit, rules to detect it or secure against it can be published. It launched with support for the Falco product they sponsor, but will soon grow to add extra tools.

CRAIG BOX: A tech preview of Pipeline 2.0, a Kubernetes platform and a surf destination in Hawaii, was launched by Banzai Cloud last week. To support hybrid cloud workloads, they built a hybrid cloud controller called Cloud Fusion, which lets you act on one cluster and modify many. Banzai Cloud is hedging its bets on hybrid, with four different ways to run hybrid clusters offered. Learn more about Pipeline in episode 59.

ADAM GLICK: Canonical announced that they have added high-availability clustering to their MicroK8s offering and enterprise SQL database integration into their Charmed Kubernetes offering. Learn more about MicroK8s in episode 60.

CRAIG BOX: Weaveworks announced Argo Flux, a collaboration between themselves and Intuit to combine the Argo CD and Flux CD tools. The announcement also says that AWS is helping bring Argo into EKS.

ADAM GLICK: Portworx announced an update to their Portworx Enterprise storage management offering, adding PX-Backup, a point-and-click backup solution for cloud storage, even if the company is not using Portworx for storage. Portworx also announced PX-Autopilot, for automatically provisioning storage for cloud-based clusters.

CRAIG BOX: Pulumi announced Crosswalk for Kubernetes, a collection of open-source tools, libraries, and playbooks to help organizations adopt Kubernetes. The tools are based on Pulumi's past year of working with customers to get their clusters up and running. Crosswalk includes KX, a library of extensions designed to remove some of the redundancy in using the Kubernetes API, to avoid what they view as a lot of unnecessary cut-and-paste work. Pulumi also announced .NET Core support, which you will have heard hinted at in episode 76.

ADAM GLICK: Snyk, with a Y, announced the release of Snyk Container, a tool for developers to help them find and fix vulnerabilities in their containerized applications. The tool scans for vulnerabilities in open-source and operating system level components of applications and is a hosted product based on their open-source offering.

CRAIG BOX: Solo.io announced the release of version 1.0 of Gloo and their Gloo enterprise product. Solo touts the enterprise and production readiness of this API gateway and points to LDAP RBAC controls and web application firewall features that are part of the release. Learn about Solo in episode 55.

ADAM GLICK: Yelp has announced that their Clusterman cluster management tool, built for managing Mesos clusters, now also manages Kubernetes clusters. They've also announced that they're open sourcing the project and have posted it to GitHub.

CRAIG BOX: Interested in building secure and reliable systems? Have you read the "Site Reliability Engineering" book and want to know more about how it works in practice? Google Cloud has just released a preview of a new book called "Building Secure and Reliable Systems." It is free to download in PDF and EPUB formats ahead of publication.

ADAM GLICK: And now time for the Kubernetes news lightning round from A to Z. A10 Networks announced a blueprint for automation of the Polynimbus secure application service.

CRAIG BOX: Agile Stacks announced KubeFlex, to aid in deploying and managing Kubernetes clusters in data centers and at the edge.

ADAM GLICK: Alibaba Cloud released version alpha two of their Open Application Model.

CRAIG BOX: Altinity announced their production-ready Kubernetes Operator for ClickHouse data warehouses.

ADAM GLICK: Aporeto launched new identity federation capabilities for Kubernetes and Istio.

CRAIG BOX: Arrikto announced that MiniKF is now available on the GCP Marketplace.

ADAM GLICK: Amazon has published a cost optimization guide for Kubernetes on AWS, offering a variety of steps you can use to cut your EKS bill.

CRAIG BOX: Buoyant launched Dive, a SaaS team control plane for Kubernetes clusters.

ADAM GLICK: Chronosphere added tracing capabilities.

CRAIG BOX: Containous launched a new ambassador program to reward and support Traefik community members.

ADAM GLICK: Datawire announced a tool for automatic HTTPS for Kubernetes ingress in Ambassador.

CRAIG BOX: DeployHub announced the release of version 9.0 of their publishing and configuration offering.

ADAM GLICK: DigitalOcean announced a container registry and a Kubernetes section in their one-click apps marketplace.

CRAIG BOX: Fairwinds launched a new open-source-as-a-service platform, Insights, and Astro, a product for managing monitors in dynamic environments.

ADAM GLICK: Hammerspace announced a persistent data protection offering for Kubernetes.

CRAIG BOX: Humio added streaming log management capabilities to an IBM Cloud Pak.

ADAM GLICK: HyScale has announced the open sourcing of their app deployment tool.

CRAIG BOX: InStar added support for Rancher.

ADAM GLICK: Kublr announced multi-site orchestration, and Kublr 2.0 is now in private preview.

CRAIG BOX: LINBIT announced Piraeus Datastore, a software-defined storage offering for Kubernetes.

ADAM GLICK: Maestro released a Kubernetes tool for multi-cluster management.

CRAIG BOX: Mattermost introduced ChatOps, an open-source project for real-time DevOps.

ADAM GLICK: NetFoundry announced a programmable network platform for apps at the edge.

CRAIG BOX: NeuVector announced a Security Policy as Code tool for Kubernetes.

ADAM GLICK: NS1 expanded their suite of integrations for modern enterprise application deployment and delivery.

CRAIG BOX: Opsani AI announced precision tuning for autoscalers.

ADAM GLICK: Oracle announced Oracle API Gateway, Oracle Logging, and Kafka compatibility for Oracle Streaming.

CRAIG BOX: Redis Labs introduced RedisInsight, a free developer and administrative tool to visually manipulate and manage data within any Redis deployment.

ADAM GLICK: Rookout announced a hybrid Kubernetes debugger for DevOps teams.

CRAIG BOX: SignalFx announced Kubernetes Navigator to provide AI-driven insights.

ADAM GLICK: StorageOS announced the release of version 1.5.

CRAIG BOX: Styra announced new features for their Compliance for Kubernetes tool.

ADAM GLICK: Trilio announced support for TrilioVault on OpenShift.

CRAIG BOX: Turbonomic announced Lemur, a new free observability tool for developers.

ADAM GLICK: Wallarm launched support for Envoy proxy and Envoy-based API protection with their SaaS security product.

CRAIG BOX: WhiteSource announced native integrations for all container registries.

ADAM GLICK: YugaByte has announced that YugaByte DB will be available as a self-managed database service on Crossplane Kubernetes clusters.

CRAIG BOX: And finally, Zebrium has announced that their no-touch log monitoring for Kubernetes is now in private beta.

ADAM GLICK: And that mammoth list, my friends, was the news from day 1 of KubeCon.

[MUSIC PLAYING]

ADAM GLICK: Vicki Cheung is an engineering manager with Lyft and the co-chair of KubeCon 2019 in San Diego. Prior to Lyft, she worked at Duolingo and OpenAI. Welcome to the show, Vicki.

VICKI CHEUNG: Thanks for having me.

CRAIG BOX: You've been a founding engineer a couple of times, at Duolingo and then at OpenAI. What's it like to be the first engineer on a project?

VICKI CHEUNG: It's super exciting. And you get really good at being resourceful and scrappy. It's also very exciting, because things move very, very fast. I feel like I'm saying all the very cliche things here. But it is absolutely the experience.

CRAIG BOX: For people like myself who may not be familiar with it, what exactly is Duolingo?

VICKI CHEUNG: Duolingo is a language learning platform. It's for anyone to learn a second language for free. They also do certified language testing for people who want to apply to university.

CRAIG BOX: Adam, I believe you are a user of Duolingo.

ADAM GLICK: I've used it for a while. And I've heard that you had something to do with the naming, if not the cute owl. Is that true?

VICKI CHEUNG: Well, the owl's all our amazing designer's work. I am very not artistic. But I definitely have had a hand in our wordmark.

ADAM GLICK: For those that aren't familiar with what a wordmark is, can you describe that?

VICKI CHEUNG: It's the logo that has the actual name in it. I also did not know what a wordmark was until our designer corrected me. Because I kept calling it the logo, and they were like, that is not the logo.

CRAIG BOX: So how did you end up designing one?

VICKI CHEUNG: It's funny, because when I first started, it had sort of a smile on it. So it ended up looking really like the Amazon logo-- or wordmark, whatever. And then I pointed this out to our designer. I was like, I think it looks like Amazon. So anyway, that's why the wordmark no longer smiles.

ADAM GLICK: Vicki, have you learned any languages through Duolingo?

VICKI CHEUNG: Yeah, I extensively went through their Spanish tree multiple times as I was developing it.

ADAM GLICK: Perhaps you can share a tip with me, which is: if I've gone through a certain part of the tree, but then I've lapsed in my playing of it so that it regresses you back and you have to earn your way back up to where you were, is there any quick way to get back to where you were?

VICKI CHEUNG: There's no cheating out of learning. The algorithm thinks you've forgotten.

ADAM GLICK: I'm looking for the test out button. Where can I prove that I know this already?

VICKI CHEUNG: I mean, I think there is a test out button. But you just have to convince the computer that you still know these things.

CRAIG BOX: When did you get involved with Kubernetes?

VICKI CHEUNG: It was pretty early on. So I was the first engineer at OpenAI. So when I joined, there wasn't really much of anything. There wasn't even an office. I had no laptop. I had to build the infrastructure from scratch, essentially, before all our researchers started. And we settled on building it on top of Kubernetes.

At the time, it was a sort of bold decision, maybe. Because I think Kubernetes was either 1.0 or 1.1. So we weren't super early beta adopters, but it was definitely not clear that it was going to be the standard. There were a lot of rough edges. But it ended up being, I think, the correct choice for us.

CRAIG BOX: Were there any considerations about the fact that you were building a machine learning platform, and that would have been relatively untested on Kubernetes at the time?

VICKI CHEUNG: Oh, for sure. We got a lot of questions all the time, because it was pretty clear as we were going along and building this platform that we were doing things that Kubernetes wasn't built to do. Or at least at the time, it wasn't. Nowadays there's a lot more community support behind it as a machine learning platform.

But at the time, it was really built for cloud-native microservices, not so much for these batch use cases. So we would start doing things that the platform clearly didn't want to do, and we would have to customize or do some workaround to make it work. And then our researchers would be like, maybe we should use another platform.

And we were pretty open-minded. We would go and evaluate what's on the market at the time. And then every time, we would come back to, yeah, Kubernetes is probably the one we should stick with.

CRAIG BOX: As the team assembling the infrastructure for a group of researchers, what did you have to do in order to make sure that you were building a platform that was suitable for the work that they would do on top of that platform?

VICKI CHEUNG: Many mistakes were made. [LAUGH] We would build things-- and we thought we knew what the researchers wanted, but obviously that's wrong, because how could we know? So it's a classic case of engineers building things, thinking that we know better than the users. And we really didn't.

So many mistakes were made. We certainly learned along the way to tighten the feedback loop. I spent quite a lot of time, actually, just sitting with our researchers and shadowing them and seeing what it is that they actually do. And we made more incremental changes to their workflows to adapt to using the Kubernetes platform, rather than saying, here's a whole new world and here's your new workflow, and trying to replace everything they were used to.

ADAM GLICK: Sounds like there were some great takeaways from that. If you were to run into a similar project, where you're essentially building up infrastructure for a number of other users, what would you take away from that experience in terms of what to do differently?

VICKI CHEUNG: I think I would spend a lot more time just sitting with the user and just listening and not doing anything, just watching and observing and listening. Maybe that's obvious for people who are more used to user experience research. But as engineers, we sort of saw researchers as tech-savvy engineers of a different type. And so we thought we understood a lot about what their use cases were. So yeah, definitely assume less and listen more.

CRAIG BOX: Two years ago, you joined Lyft, and you now run the compute team that manages Kubernetes. What has the experience been like at Lyft?

VICKI CHEUNG: The experience has been great. It's very different, coming from OpenAI. OpenAI was a research lab run like a startup, I guess is the best way to put it. Lyft is more of a traditional tech company. So processes and scale-wise, it's a little bit different. But I really enjoyed it, because Lyft was pretty established already two years ago. But we had this opportunity to rebuild, essentially, the foundation of their infrastructure, using Kubernetes.

So this isn't an opportunity that comes up very often, where you go to a big place that already is running all these things at scale, but now you get to re-architect how it all works.

CRAIG BOX: Could you describe how the Lyft infrastructure has changed over those two years?

VICKI CHEUNG: Before I joined Lyft, none of our production stuff was running on containers. So we're an AWS shop, and everything was running directly on AWS. We ran into a lot of challenges, especially as we grew in the number of microservices we were operating. This is the classic reason people start considering containers: as you have hundreds of microservices, you want to make sure that the infrastructure team can continue to provide a good basis for running these services, while the service teams can continue to be productive within the layers of their stack.

So when I came in, we were just starting to validate Kubernetes. And that's what we've been doing. And now we have production stuff running on Kubernetes. I guess that's where we are so far.

ADAM GLICK: The things that you're describing remind me a lot of the challenges that many large organizations are facing today: taking an existing set of infrastructure and essentially re-platforming it onto containers and Kubernetes. How did you go about picking which things to start with? I mean, there are so many different running services and machines. How do you eat that elephant, so to speak?

VICKI CHEUNG: We actually started with all of our batch and machine learning workloads. I mean, partially, I'm most experienced with those. But another reason is because I think they have this great attribute of things being a lot more episodic, if that makes sense. You spin off a batch workload or a job or an experiment.

And then it's a lot easier to say, OK, for my next batch of experiments, I'm going to run it on the new platform instead. Rather than for something that's a long-running service, that's serving production traffic, to have to shift over to a platform slowly. So it was just a lot easier to iterate on the batch side of things.

CRAIG BOX: We spoke to your colleague Matt Klein in episode 33, who started the Envoy project while working at Lyft. What can you tell us about being a user of Envoy inside Lyft?

VICKI CHEUNG: There are a lot of engineering practices that are deeply integrated with our Envoy stack. In particular, in relation to our migration to Kubernetes, it's been a big help, because all of our servers were already using Envoy. So we didn't have to move them to a service mesh, or move them to some other networking technology on Kubernetes. We could just be pretty much transparent to the services: you're using Envoy in your old stack, we're going to lift and shift you into a container, and you continue to use Envoy as the proxy going out.

CRAIG BOX: When you say lift and shift, do you spell them both with a y?

VICKI CHEUNG: [LAUGH] That would have been a good tagline.

ADAM GLICK: Craig, you missed your calling in copywriting.

You've worked on some really exciting technologies, especially a lot of AI work. And despite that, you've really chosen to focus yourself on infrastructure, which is not often seen as the hot and exciting area, though certainly it's super critical. What made you decide to focus on infrastructure?

VICKI CHEUNG: I think initially-- I guess one thing about being the first engineer in all these projects-- going back to the earlier question-- is you really learn to do whatever it takes to get the job done. So you learn to be extremely flexible with your skill set. So I guess in that sense, since I was the first engineer at OpenAI, even though I wasn't an infrastructure engineer before, I essentially stumbled into becoming an infrastructure engineer because that was the thing that we needed.

And that's actually why I've stuck with it since then: this realization that everyone needs infrastructure. Even if the cloud is providing all these managed services or whatnot, everyone needs infrastructure to even get off the ground or get started. And it was that realization that, oh, if I'm in infrastructure, I can enable all these things. I could go from OpenAI to Lyft-- two very different companies-- but my skills would still be applicable here. And I find that very empowering, I guess.

CRAIG BOX: You were the co-chair of this week's KubeCon event in San Diego. What has that involved for you so far?

VICKI CHEUNG: The responsibilities really start at the beginning, with thinking through what the trends or the themes are. So Bryan Liles, my other co-chair, and I will go through all the proposals. Even before that, we select the program committee to score all these proposals. So usually they come from our open-source maintainers or past speakers. And we make sure we have good representation in our program committee.

And then once the committee has scored all the proposals, then Brian and I go through them and take a look at what the trends are and what the tracks should be. Usually we have already established tracks for this conference. But most of what we do is looking at which tracks are going to be popular or relevant this year. And then we decide which talks to accept. That's sort of the first phase.

And then the second phase of that is really, right now, ramping up to the conference, making sure that all the content is going to resonate with the audience, and prepping for our own talks.

ADAM GLICK: How do you decide on when topic areas are getting interesting enough and there's enough interest in them to start creating a separate track? Because it started with Kubernetes, and obviously, there's so many more projects that have come out and so many things that people have interest in. When does that interest grow large enough that you decide, hey, we really need a separate track to talk about this?

VICKI CHEUNG: I think some of it is informed by our own experiences working closely with these technologies. And some of it comes from seeing what people are submitting to the conference, seeing the variety of use cases that are out there in the wild.

I think, for example, we can see the evolution of the community-- from even a few years ago or even two years ago till now-- really maturing with use cases, or exploring more sophisticated or enterprise use cases. That's really driving, for example, a lot of talks on operations at scale, or driving more talks on security or developer environments for larger teams.

CRAIG BOX: How much leeway do you have to let your own personality and preferences shine through in this role, rather than just representing the audience? Are there any things that you did to try and make KubeCon a conference that you personally would like to attend?

VICKI CHEUNG: Yeah. I think the conference itself, year after year, gets more and more competitive. So there are just a lot of good talks that we get that we may not necessarily be able to accept. So it is very hard sometimes to make that decision. Obviously, my personal bias comes in right when we get a lot of talks in the top 50 that are all really good, and we have to pick 10.

I can't say that it's all based on what I want to see. It's all informed by, first and foremost, the program committee. But after that, I definitely do see, these talks are all interesting, but I would personally think this is more useful or more relevant this year.

ADAM GLICK: What would you say you learned along the way?

VICKI CHEUNG: There is a lot of research that I did into other CNCF projects. I am exposed to a number of them just through my work or my daily life. But the ecosystem is expanding at such a crazy rate that sometimes I find it hard to keep up if I'm not actively trying. So definitely, as I see these talks in areas that I may not touch at Lyft, I need to go do some research to understand what it is that people are doing with these technologies.

CRAIG BOX: In that research, what have you found has changed about KubeCon over the years?

VICKI CHEUNG: I think there are a few things. The ones that really stick out to me are there's a lot about developer productivity. I think as more companies move to cloud-native microservices architecture, there is really this-- to me, unsolved-- problem of how do we keep people developing as seamlessly as they used to, with this one monolithic Flask application that you can run on your laptop, and you can iterate very quickly and use the debugger.

It's a very smooth experience. And that's largely not replicated when we move to running hundreds of microservices in the cloud. So there's quite a few projects I'm excited about that I think we'll see at KubeCon with several talks about how to evolve our developer experience as well to keep up with the trends. Another thing that I learned quite a bit about is what larger enterprises do with security and governance, and also how to scale that with the new architecture.

ADAM GLICK: What are you most excited about for this KubeCon?

VICKI CHEUNG: I'm excited to see all the use cases coming to maturity. I think this is really a trend that we've been seeing, but this is the first time I'm seeing very traditional enterprises adopting this technology-- banks and things like that, which are not tech companies. They're using this technology in production. I see that as a moment for the community to be like, all right, this technology is ready for the big stage now. I'm very excited to see that coming.

ADAM GLICK: You're certainly not a passive observer of this KubeCon. You're the co-chair of it. So what are you personally most proud of in terms of the impact that you've had on it?

VICKI CHEUNG: I think Bryan and I both care a lot about diversity in the community. So I'm very happy to have a hand in even just influencing little things that we do to make the community more inclusive. One small thing we did was for the program committee: one, making sure that their time is being rewarded by the CNCF; and two, giving people the option to donate tickets to the diversity scholarship. Those are things I've gotten really positive feedback on from other people who are going to the conference, and that I'm very, very happy about.

CRAIG BOX: We can tell that you are obviously very happy to work with your co-chair, Bryan. And we very much enjoyed the conversation we had with him before the last KubeCon in Barcelona. What would you say you've learned from Bryan?

VICKI CHEUNG: Bryan is a lot more experienced with these technologies and this community than I am. I work at more of an end-user company, whereas he is more involved in the upstream community. I really like that we can share our different perspectives on where we see the technology going.

He can see, up front and up close, who the people are in the community, what they are doing, and where it's all going. Versus me, I'm like, these are my needs right now, and I want to see these things in the community. So we're coming from different ends of the spectrum, and I really like that difference.

CRAIG BOX: What things have you learned along the way that will be advice you pass on to your co-chair for the next conference?

VICKI CHEUNG: I think the one thing that I feel like I learned along the way, which would have been good to start from the beginning, is looking for the unexpected content. If you can see what I see, which is a long list of talks that are rated with scores by our committee-- and they're all ranked, sorted in this long list-- you'll find that the top are usually all the trending or popular talks. And then there are some more divisive talks in terms of the scoring, because they're more niche or maybe more unexpected or surprising.

So sometimes, I like to mix it up a little bit and look at those talks that are not necessarily the top of the top, but look down a little bit for the more unexpected ones. And there might be gems there. So that's sort of the advice I would give.

ADAM GLICK: As you look forward to the next KubeCon, what would you like to see change? And what have you learned that you would apply to that?

VICKI CHEUNG: I think it's getting harder and harder to get into KubeCon, because there are so many projects in the CNCF that it's hard to get them all covered. So one thing I'm really excited to see is this idea of Kubernetes Days by the CNCF, which is a smaller, more regional version of KubeCon.

I think that will allow us to funnel more talks to the community, without rejecting them from KubeCon. They'll still get their chance to shine within the community. And so hopefully, I want to see more diversity of topics at KubeCon. And then for the more specific topics that are really popular, we can do things like Kubernetes Days to address those hot topics, if that makes sense.

I think, historically, we haven't necessarily had a lot of deep dives into the technology at KubeCon, just because Kubernetes was still relatively young or relatively new. So a lot of the audience was just dipping their toes into the technology. But now that more and more people are adopting this in production, there are actually a lot more sophisticated use cases. I'd like to see talks that are doing really deep dives, maybe catered to more advanced or power users of this technology, so they can have good takeaways from the conference as well.

CRAIG BOX: Vicki, thank you so much for joining us at this busy time.

VICKI CHEUNG: Thank you for having me.

CRAIG BOX: You can find Vicki on Twitter, @vmcheung, as well as in the keynotes over the next few days. Thanks for listening. As always, if you've enjoyed the show, please help us spread the word and tell a friend. If you're just listening online, please subscribe. If you have any feedback for us, you can find us on Twitter, @kubernetespod, or you can reach us by email at kubernetespodcast@google.com.

ADAM GLICK: You can also check out our website, kubernetespodcast.com, where you'll find transcripts and show notes.

CRAIG BOX: A lot of show notes.

ADAM GLICK: Indeed. Until next time, take care.

CRAIG BOX: See you next week.

[MUSIC PLAYING]