#64 July 30, 2019
Cloud Code provides everything you need to write, debug, and deploy Kubernetes applications, including extensions to IDEs such as Visual Studio Code and IntelliJ. Joining Craig and Adam are Sarah D’Angelo, a UX Researcher, and Patrick Flynn, an engineering lead, both on the Cloud Code team at Google.
Do you have something cool to share? Some questions? Let us know:
CRAIG BOX: Hi, and welcome to the Kubernetes Podcast from Google. I'm Craig Box.
ADAM GLICK: And I'm Adam Glick.
[MUSIC PLAYING]
CRAIG BOX: Last week on the show, we mentioned the Windows Container Unconference to be held that week. And you went along!
ADAM GLICK: I did, indeed. It was great to meet up with folks from all across the community-- folks from Microsoft, from Red Hat, folks from Google, Docker. Met a number of other organizations that were all there. Good to chat with folks, kind of talk about what Windows Containers are like, how they're different than what people are used to maybe with Linux Containers, and how it helps people who are used to a more Windows-focused development cycle move into the Kubernetes world.
CRAIG BOX: Held on the Microsoft campus?
ADAM GLICK: It was, indeed. Building 20 there.
CRAIG BOX: How's the food there?
ADAM GLICK: Funny note. I actually took a call and missed the ability to get into the cafeteria. So I sat down and met a gentleman who was, as I can best describe it, eating like a T-Rex. And he was explaining that he was on the full carnivore diet, and he was eating two pounds of ground beef for both dinner--
CRAIG BOX: Wow.
ADAM GLICK: --and for lunch. In fairness, that is cooked ground beef. But that was quite a sight. I was impressed by it. Speaking of cooking, how are you handling the heat over in the UK?
CRAIG BOX: Well, as you'll remember, there was a heatwave across the US the week before last. And Europe doesn't like missing out, so we had a completely unrelated heatwave last week. It was the hottest day on record in France. It was the hottest July day on record, and it was hovering around the warmest day in Britain of all time. And we were all watching our thermometers nervously to see what would happen.
And it actually took until a few days later. It has only recently been announced that it did, in fact, break the record for the hottest day on record in the UK-- 38.7 degrees, which in Fahrenheit, is well over 100. Those numbers might not seem a lot to you, but the house that I live in was built during Queen Victoria's reign. So there's not a lot of air conditioning, and it's quite hard to sleep. And it's just nice to be out the other side and able to breathe again.
ADAM GLICK: It's good to hear that it's cooled off. Shall we get to the news?
[MUSIC PLAYING]
ADAM GLICK: The Knative project celebrated its first birthday this week. The serverless platform for Kubernetes has had seven releases since launch and leads in its space with over 400 contributors from 80 different companies, including IBM, Red Hat, SAP, TriggerMesh, and Pivotal.
It has also added non-Google contributors at the approver, lead, and steering committee levels. Knative is seeing 20% monthly growth in contributions, and that's without factoring in Tekton, the Knative build components, which were spun out as a separate project for the founding of the Continuous Delivery Foundation. If you want to learn more about Knative, Tekton, or the CDF, you can listen to episodes 14, 47, or 44 from our back catalog.
CRAIG BOX: Grafana Cloud recently had an outage on their hosted Prometheus service, and have released a post-mortem for the issue. While setting up pod priorities, they configured the new application without updating all the existing apps. And pods from the new app ended up preempting pods from the old ones.
Thankfully for a logging and monitoring company, they were able to prove that no logs or monitoring events were lost in the course of the outage. Their write-up is a good read if you're looking to implement pod priority and preemption, and it generated a new entry for the Kubernetes Failure Stories site, which we covered in episode 38.
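For context, the pod priority mechanism behind the outage comes down to two pieces of configuration. This is a hypothetical sketch, not Grafana's actual setup; the class name, value, and image are illustrative:

```yaml
# Hypothetical PriorityClass; name and value are illustrative.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000            # higher values win under resource pressure
globalDefault: false
description: "Priority for latency-sensitive ingest pods"
---
# A pod opts in by name; pods with no priorityClassName default to priority 0.
apiVersion: v1
kind: Pod
metadata:
  name: ingester
spec:
  priorityClassName: high-priority
  containers:
    - name: ingester
      image: example.com/ingester:1.0   # placeholder image
```

The failure mode in the write-up follows directly: if only the new application gets a priorityClassName, every existing pod is implicitly priority 0, and the scheduler may preempt those pods to make room.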
ADAM GLICK: A couple of weeks ago, we covered a blog post on Istio benchmarking, suggesting that you should test with your actual workload in mind. Banzai Cloud have done just that, benchmarking Apache Kafka with and without Istio. They found that the throughput loss of adding mTLS with Istio, which they measured at 20%, was less than the overhead of enabling TLS on Kafka itself, which they measured at 30%. They also found that using Linkerd, which doesn't provide mTLS here, reduced throughput by 45% with no improvement in the security posture.
CRAIG BOX: Docker announced that Docker Enterprise 3.0 is now generally available. The company says 2,000 people used the service in beta, and reiterated its enhancements, primarily being Kubernetes and application bundling built on CNAB, as covered in episode 61. They also announced a new technology partner program, fully completing their transformation to capital-E Enterprise. We look forward to Docker Enterprise for Workgroups 3.11.
ADAM GLICK: Kubernetes tech lead and guest of episode 41, Tim Hockin, has had to reconcile himself to the fact that people don't always understand reconciliation. He posted a short slide deck on the topic this week, which explains how Kubernetes handles interfacing with other APIs, like those of clouds or containers. Good controllers should reconcile against underlying APIs regularly, and this presentation will explain why.
CRAIG BOX: Fairwinds, the company formerly known as ReactiveOps, has reacted to demand and launched Polaris, a tool for measuring the health and best-practiceness of a Kubernetes environment. Polaris can validate your objects against configurations you set-- for example, that a pod must have a readiness check-- and an optional webhook can reject pods that would cause a cluster to fall below a validation threshold. The tool is open source and can generate reports either locally or connected to a hosted dashboard called Polaris Snapshot.
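As an illustration of the kind of rule Polaris checks, the "pod must have a readiness check" example corresponds to a container spec like this; the image, path, and port are made up:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: example.com/web:1.0    # placeholder image
      readinessProbe:               # the field a readiness-check rule looks for
        httpGet:
          path: /healthz            # illustrative health endpoint
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
```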
ADAM GLICK: Interested in how General Motors protects their Cruise self-driving electric car software running on Kubernetes? They've posted a blog as part of a series on their security design that talks about how they utilize multi-factor authentication and their management of multiple Kubernetes clusters to build a secure service. If you're curious, you can check out their blog, and also see the best encryption at rest icon I've ever seen.
CRAIG BOX: Following Pivotal last week, we turned to Cloudera, who are also seeking salvation with Kubernetes. They announced a scheduler called YuniKorn, with a Y and a K. YuniKorn is a new, stand-alone, universal resource scheduler responsible for allocating resources for big data workloads, including batch jobs and long-running services. It seeks to create a unified scheduler to unite YARN and Kubernetes.
ADAM GLICK: The CNCF has released its summary report for the recent KubeCon in China, welcoming 3,500 attendees, up 1,000 from last year. The focus on China was brought into perspective as the CNCF noted the strong interest and involvement in Kubernetes from the country, with 16% of platinum member and 35% of gold member organizations being based there.
By country, China is now the second largest contributor to Kubernetes behind the United States. With KubeCon in China now showing clear traction, it looks like we'll get three main events each year to look forward to for big cloud native related news.
CRAIG BOX: Finally, being an enterprise management platform for servers these days means thinking about how to provide Kubernetes as well. Kazuhm, spelled K-A-Z-U-H-M, but pronounced "Throatwobbler Mangrove", and Morpheus Data, the official multi-cloud management platform of Laurence Fishburne, have both announced managed Kubernetes platforms this week, with the Morpheus platform certified by the CNCF.
ADAM GLICK: And that's the news.
[MUSIC PLAYING]
CRAIG BOX: Sarah D'Angelo is a user experience researcher based in Seattle, and Patrick Flynn is a technical lead based in New York, but joining us today from Paris. Both work for Google Cloud on developer tools, and specifically Cloud Code. Welcome to the show.
PATRICK FLYNN: Thank you.
SARAH D'ANGELO: Thanks.
CRAIG BOX: What exactly is Cloud Code?
SARAH D'ANGELO: Cloud Code is an SDK for application developers using Kubernetes or Google Cloud. It includes a suite of tools to deliver an end-to-end local development loop that is fast and easy. To deliver that, we provide, from the top down, an IDE extension for Visual Studio Code and the JetBrains IDEs, CLI tools to drive the workflow and provide local emulation, and containerization support with tools like Jib.
CRAIG BOX: How did you pick which IDEs to integrate with?
SARAH D'ANGELO: We chose VS Code and the JetBrains IDEs. We did some research, and saw that those are the most popular IDEs amongst developers. And we wanted to start somewhere and reach a large audience, so we're working with those.
CRAIG BOX: Why not put all the wood behind one of those arrows?
PATRICK FLYNN: To be honest with you, we would really love to not have to do more than one IDE, but it's just not going to work to really reach the audience that we want to reach. The fact of the matter is that a lot of enterprise users are using JetBrains IDEs. And then we have VS Code, which is coming up really fast, but still not that prevalent in large enterprise. So if we picked either of those audiences, we would have missed too many people. So we figured this was a good balance of both the existing enterprise audience and the fast-growing VS Code user base.
CRAIG BOX: Did you ever contemplate building one from scratch?
PATRICK FLYNN: It has been discussed, and Google has experimented with it, but not seriously at this point.
CRAIG BOX: Quite often in the context of development, we'll hear people talk about an inner and an outer development loop. What are those two loops, and which one is Cloud Code addressing?
PATRICK FLYNN: The inner loop is the loop prior to push and code review-- the one that the developer spends most of their day in: coding their application, running tests or running the application to see the effect of their change, and then going back to coding to address any issues. The latency of that inner loop is really important in terms of how fast they can get from making their change to actually seeing its effect on the unit test, integration test, or, let's say, the code running in an HTML page.
The outer loop is the CI and review process loop. If you're using GitHub, it would be-- once you've done the push and you've sent the pull request, how long does it take to get that code actually merged into the delivery system and pushed out to production? So at the moment, Cloud Code is extremely focused on the inner loop, because that's where we think we can provide the most value, and where people are experiencing the most pain, and in fact, where developers spend most of their time.
Once we feel like that inner loop is optimal-- and that means as good as or better than the native development experience-- we'll definitely start considering outer loop scenarios, which we think could be really incredible: taking the information that's in the delivery system, or even running in production, and surfacing that somehow in the developer's IDE while they're developing their code.
CRAIG BOX: So while people are working on their inner development loop, their local development-- previously, people would have built their application in just the tooling provided by the programming language. Now we're targeting Kubernetes as our end development environment.
We see people wanting to do development in Kubernetes locally, be it through something like Minikube or the Docker tools installed locally. Why is it important to build and test your application locally on top of Kubernetes, or in containers, rather than just compiling it, running it, testing it that way?
PATRICK FLYNN: In talking to users, I was really expecting people to do more of just using their native tooling and waiting, basically, for their CI environment or their shared dev environments to actually do testing in their real Kubernetes cluster. But when I talked to a lot of users, I discovered that most of the time, they actually would use Docker, and frequently Minikube, or a user-specific cluster that was running in a cloud, or even on prem.
And ultimately, I think the reasons for that were, once you start using an enabling technology like Kubernetes, your architecture starts to change. You start having more, smaller services, following the microservice mantra. And a lot of the same problems around managing those services in production apply when you actually want to run them locally.
So people really leverage the existing Kubernetes infrastructure and the config that they've created to have a high-fidelity environment locally. That being said, I still think that for some use cases where you're porting in a Node app or a WAR that runs on Tomcat, and just containerizing that, and it happens to run on Kubernetes, you might just as well use your native tooling.
CRAIG BOX: Is there anything specific to the Kubernetes environment that people need to worry about when they're beginning to write code?
PATRICK FLYNN: There are a few things. For instance, what Kubernetes has enabled is a practice of pushing a lot of concerns that typically existed in the application software stack down into infrastructure. An example of this is Istio, a service mesh, which is a way of ensuring that a lot of the cross-cutting networking concerns are managed at the infrastructure level-- things like security, like quota, but also circuit breaking.
So if you've ever used Spring Boot, for example, which has this built-in circuit breaking support, if you're using Istio, you probably wouldn't want to actually use the circuit breaking at the application level. You'd want to use it at the infrastructural level. And that means that if you want to test it, then you actually need Istio running, which means you need Kubernetes.
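To make the circuit-breaking example concrete: in Istio, it's declared as traffic policy on a DestinationRule rather than written into application code. A minimal sketch, with an illustrative host and limits:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-circuit-breaker
spec:
  host: reviews.default.svc.cluster.local   # illustrative target service
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 10   # reject requests beyond this queue depth
    outlierDetection:                 # eject backends that keep failing
      consecutiveErrors: 5
      interval: 30s
      baseEjectionTime: 60s
```

Because the breaker lives in the Envoy sidecar, exercising it in a test genuinely requires Istio, and therefore Kubernetes, which is Patrick's point.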
CRAIG BOX: So this is a tool that's developed by Google to run on people's local computers. And Google don't generally do that. Most of the stuff that we do is a web service or something that runs in the browser. What considerations have you had to make to build something that's distributed to run on people's local machines?
SARAH D'ANGELO: So I think this is an interesting question from both UX and a technical perspective. From a UX perspective, we have to think a lot about the constraints of the platform, like what can we do in Visual Studio Code or in the JetBrains IDEs. And what do those interfaces look like, and how do we fit well into the user's existing experiences within those IDEs that are not Google? So you're not having an entirely Google experience, but we're embedding some experiences in there. So there's some interesting problems from a UX perspective there.
CRAIG BOX: Can you talk through one of those problems?
SARAH D'ANGELO: Yeah. So I think one thing that we don't always know is how the environment is configured for the user. So what other plugins do they have installed that are working with Cloud Code, or interacting with it in different ways? And how does that whole environment affect their user experience?
And we can't necessarily account for even simple things like how they've changed the color configurations of their IDE, which is something popular for developers. And so from a visual perspective, it's difficult to anticipate what it's going to look like on the user's end. And from a user flow and experience perspective, we have to be flexible, which just makes it a lot more fun to design for.
CRAIG BOX: Are you a light mode or a dark mode person?
SARAH D'ANGELO: I'm definitely a dark mode person.
CRAIG BOX: And Patrick, from an engineering perspective?
PATRICK FLYNN: I spent almost my entire career building services, so it definitely was a shift to actually start shipping this that people installed on their machines. And it involved changing a few things that I took for granted. Like a delivery mantra of delivering as often as you can is just impractical when people are either installing locally or getting update notifications and having to trigger those updates. You can really lose your user base very quickly if they see these update notifications on a daily basis.
So that means, ultimately, finding the right cadence where we can prepare our releases and deliver it to users in a tolerable way, and also means that when things go wrong, there's more latency to getting the fix out, right? And so we do a lot more preparation and testing and validation of our release candidates than I typically did when I worked on the services side of things.
The other thing I would say is that, at least for the IDE extensions, those platforms, like VS Code and IntelliJ, each have their own peculiarities when it comes to sandboxing and dependency management. And so figuring out how they work and figuring out how to package and modularize your application to target those platforms is definitely a challenge.
CRAIG BOX: Both of those applications are cross platform. They run on Windows and Mac and Linux. Are there considerations you have to make for the add-ons you make, to make sure they run across those three platforms?
PATRICK FLYNN: For VS Code, the extensions are written in TypeScript, and for IntelliJ, they're written in Java. The platform native considerations tend to be around the version of Java, for instance, on IntelliJ. And that's relatively easy to manage. But we do use some actual system native dependencies. And in those cases, we have to package multiple platform binaries so that we can call out to the right one depending on the platform.
CRAIG BOX: Developer tools are something that all developers use, and so it's easy to assume that they'll all have an opinion on how they should be built. Do you find that working on these tools, the feedback from the customers and the people you're working with-- are they always coming at it and saying, all right. I have an opinion on this. Or, are they willing to listen to research and make sure that you get something which is not just targeted for them?
SARAH D'ANGELO: Yeah, so I think that's an interesting question, especially with Kubernetes developers, since it is new. And I think the field is still coming up with best practices. So I think our users are really interested in hearing from us and others, what we think the best practices should be and how we can provide suggestions and opinions to have a more efficient and effective workflow.
So I think what I have experienced from talking to customers and users is that they're really open to experimenting and trying new things. And I think with these plug-ins, we're really hoping to try to provide suggestions on effective workflow for developers and reduce the barrier to entry with Kubernetes.
CRAIG BOX: Is there anything specific that you've learned from the way that Googlers do development?
PATRICK FLYNN: Yeah. I think, ultimately, I've been working at Google for almost 12 years. And when I first started, I was really impressed by the machinery around process and development that just gave you a lot of the best practices that Google expected you to follow for free, as long as you fit into that machinery.
And I think the analog to that is the "shift left" movement and the continuous delivery movement. So a lot of those practices you see more broadly in the ecosystem today. But when we think about those things with regard to Cloud Code, it just means that we focus on the use cases that make sense in this best practices, continuous delivery process-- or I should say software delivery process. It's not necessarily continuous.
CRAIG BOX: What exactly is the "shift left" movement that you speak of?
PATRICK FLYNN: So "shift left"-- at least my interpretation-- is doing analysis and understanding failure earlier in the process: having tests run as you're coding, so getting the CI feedback earlier. Finding out that, let's say, your Kubernetes configuration is invalid before you're actually trying to run it on a cluster. So Google has a lot of automated tools to do large scale analysis on code bases and on change lists, which are the Google equivalent of pull requests.
And we really do want to help developers in their IDEs actually catch all these errors and bugs in their code, and not even necessarily bugs, but poor practices and poor security practices really early in the process, rather than having to follow some documentation or discover after trial and error that their current configuration isn't working for them.
CRAIG BOX: Do you see that as something that will run in everyone's IDEs, or is that something that would run centrally in the server?
PATRICK FLYNN: I don't think that the code for that will necessarily exist in the IDE, but I believe that the first interaction with a result of that analysis is going to happen in the IDE. So that's why we've designed these extensions to share a lot of code through services and through CLI tools.
CRAIG BOX: This thing is called Google Cloud Code. Is it for Google Cloud users only, or is it for everyone who runs Kubernetes?
SARAH D'ANGELO: With Cloud Code, our top priority is to support Kubernetes users, so we support users with any Kubernetes endpoint. We have some special integrations for GCP, AWS, and Azure, like cluster creation. But we are trying to meet any Kubernetes user where they are. We do also have GCP-specific services that integrate well with Cloud Code, so you can look out for those as well.
PATRICK FLYNN: Yeah. I would kind of reiterate that we have two missions. It is crucial for GCP users to have a good application development experience in order to have a fantastic Kubernetes experience. But beyond that, we really want to build-- and we're really motivated to build-- all of our Kubernetes support in such a way that people can use it with any cloud or on prem, just to help that Kubernetes ecosystem grow.
CRAIG BOX: Now, there are a number of Google open source tools in this space, many of which are integrated into Cloud Code. So I want to ask you about a few of them here. How does Skaffold fit into Cloud Code?
PATRICK FLYNN: Skaffold predates Cloud Code. It turns out that we built a set of container tools that were providing point solutions to problems that we saw for people targeting Kubernetes. And conveniently, we have tools that have turned into the exact things that we needed to provide the Cloud Code end-to-end experience in the IDEs.
Skaffold is ultimately a workflow tool-- it acts as the kernel here. It has a CLI, and it watches your project for changes, rebuilds them, and will then run them in Kubernetes for you. It can also do magical things like configure your containers for debugging so that the IDEs can then attach a debugger to them, or sync source files into a live running container. And those are all use cases that are really useful for the IDE.
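A minimal skaffold.yaml for that loop might look like the following; the image name and manifest path are placeholders, and the apiVersion reflects the Skaffold releases of the time:

```yaml
apiVersion: skaffold/v1beta13   # Skaffold schema version circa mid-2019
kind: Config
build:
  artifacts:
    - image: example.com/my-app   # placeholder image name
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml                # placeholder path to Kubernetes manifests
```

`skaffold dev` starts the watch-build-deploy loop, and `skaffold debug` is the variant that also reconfigures containers for debugger attachment.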
CRAIG BOX: Another tool you call out in the Cloud Code web page is Jib. What exactly is Jib?
PATRICK FLYNN: When we look at the end-to-end experience for a Java developer, for instance, who suddenly wants to target one of these cloud native container orchestrators, the first thing that he or she needs to do is actually take their Spring Boot app and containerize it. But that's not at all easy at the moment. So what Jib does is it builds containers in a way that's natural to Java developers.
It's a plug-in for Maven and Gradle that you can just add to your pom.xml or build.gradle file, and it will containerize your Java app with almost no configuration, and will do that in a way that is optimized, minimal, secure. It can build reproducibly. It doesn't require a Docker file or installing Docker to get started.
So that really reduces a lot of the barriers to entry, and it also throws people down the pit of success, right? They'd have to work really hard to end up with a bad container configuration, or a slow build, or an insecure runtime. That's the core and the first building block of what we're trying to deliver for the Cloud Code experience: take a project, whether it's for Node or for Python, magically containerize it using the best possible practices, and make it really easy and efficient to iterate on it.
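To see how Jib and Skaffold compose, Skaffold can delegate the image build to the Jib plugin, so the project needs no Dockerfile at all. A sketch, with a placeholder image name, using the builder field names from Skaffold configs of that era:

```yaml
apiVersion: skaffold/v1beta13
kind: Config
build:
  artifacts:
    - image: example.com/spring-app   # placeholder image name
      jibMaven: {}    # build via the Jib Maven plugin; jibGradle for Gradle projects
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml    # placeholder manifest path
```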
CRAIG BOX: OK. I've got Cloud Code installed in my IDE of choice. What do I do next? How am I interacting with it?
SARAH D'ANGELO: There's a lot of ways you can get started using Cloud Code. One of the things we wanted to do is make it easy for people who are new to Kubernetes to get started. One of the things I like to do is use all of our template applications. We support Java, Go, Python, Node, and .NET. And we're continuing to add new templates to those to help people grow and expand with their development.
So you can see how quickly you can get an application up and running. You can do things like create a cluster from the IDE, deploy, and view your Kubernetes resources. So we try to make it easy to get all that information in there. So honestly, it's a great way to start playing around with Kubernetes in your IDE. And if you're looking for more complex things, you can start to expand. We've got great tool tips for editing YAML, and as Patrick mentioned, debugging your code all easily within your IDE.
CRAIG BOX: Well, I'd like to dig into that a little bit. If you write code and then you're using the Skaffold engine, I understand, to build that into the container and have it running, in the previous world, I would basically have a debugger which just attaches to the process that's running. But now I have the indirection of having to communicate over some kind of network. It might be Kubernetes running locally, but it could equally be a connection to a remote cluster. How does the debugging process work?
PATRICK FLYNN: To effectively debug a Java application-- I'm a Java developer, so I'm going to default to that example. If you have a Jib container, it's using Distroless, which is a base runtime that has no debugging tools, not even a shell. And the entry point configured in the image is, let's say, "java -jar" with whatever JAR you have, but without any of the debug flags. Now, if you want to actually run that container and be able to debug it, you're going to need to modify the entry point so that it actually includes the necessary Java debug flags.
Now, what we will do for you-- what Skaffold does for you with the debug command-- is that it'll go ahead and mutate your Kubernetes resources invisibly in the background, change your entry point, and for runtimes that need additional debugger agents, configure those on a volume and mount it into the container, and then just run the container so that it becomes debuggable. And then Skaffold will port-forward the debug port for you so that once the IDE is ready to connect to that port, it all just works.
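For a JVM workload, the mutation Patrick describes amounts to roughly the following; the details vary by Skaffold version, but the idea is injecting JDWP options and exposing a debug port. The image name and port here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: example.com/my-app   # placeholder image
      env:
        - name: JAVA_TOOL_OPTIONS   # read by the JVM at startup, so no entry point edit is needed
          value: -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
      ports:
        - containerPort: 5005       # JDWP port, forwarded for the IDE
          name: jdwp
```

Skaffold then port-forwards 5005 to the local machine, and the IDE attaches a remote JVM debug session to localhost:5005.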
CRAIG BOX: And I guess I can just right click on the app in the IDE and say debug, and it does all this behind the scenes for me?
PATRICK FLYNN: Exactly.
CRAIG BOX: Last week, IBM announced a tool called Codewind, and Microsoft also have extensions for Visual Studio Code. So obviously, there are a lot of people who are looking to solve things in this space. Is this a collaboration, or is this something that we think all vendors will probably end up building their own thing?
PATRICK FLYNN: I don't know the answer to that yet. I think that it could eventually become a collaboration. We're definitely open to that. But for now, it's such a new space. And if you talk to users-- you mentioned earlier about, what are the opinions of users on this? Many of them just don't know, right? They don't really know how things should work. They just want it to work well and quickly. So we're really in this experimental phase where we're building out tooling and seeing whether that satisfies their need.
So we just felt we needed to move fast. There were pressing issues with existing customers who were trying to target Kubernetes and feeling like it's really a usability regression from their existing platforms. So we're trying to tackle that. And then I feel like once we have a clear view of where that's headed and we really understand what the other players in the ecosystem are doing, it might make sense to collaborate at that point.
CRAIG BOX: And, Sarah, as a user experience researcher, how do you balance needing to not sell a faster horse to these people and get them using tooling that just looks like what they're used to versus developers who are trying to say, hey, here's a brand new paradigm which might be scary for the people who aren't used to those new things?
SARAH D'ANGELO: So it's a really interesting problem, because we want to be in this innovative space and providing new solutions, but we also want to work well with developers' existing practices. And I think a lot of what we do can often be hidden in the workflow. So if developers aren't noticing Cloud Code, but they're working more effectively, that's great for us. So we do a lot of that with the YAML auto completes and tooltips-- so some of these subtler things that can be easily integrated into developers' workflows without having to change too much.
So we're really mindful of what people are currently doing, and trying to meet those needs where they have them, as well as putting in new innovations and incorporating our GCP services to try to take people to where they want to go. So it's a lot of give and take, to be respectful of the current process while exploring these new features that everyone in the Kubernetes community is looking for. So it's a balancing act for sure.
CRAIG BOX: Well, that might be worth digging into, because many people will have worked in development, or at least be familiar with a day in the life of a developer. You are a user experience researcher, and that's possibly not a thing that people are as familiar with. What exactly is a day in the life of a user experience researcher?
SARAH D'ANGELO: Being a user experience researcher, I'm not a developer. But I work with developers, and I'm trying to understand developers. So as a user experience researcher, a day in the life for me is to work closely with our product teams-- Cloud Code for me-- and talk to our users. I spend a lot of time talking to Kubernetes developers asking what they need from us, what are their current pain points, and trying to identify opportunities for future growth.
It's my job to be the voice of the users. And we want to make sure that we're building tools that are usable, useful, enjoyable. So we really focus on communicating with our users through various methods. So hopefully some people listening in have maybe talked to me in an interview or filled out one of our surveys. We take those all into consideration when we make and prioritize our decisions for Cloud Code and future directions of our products. So I spend a lot of time talking to people.
CRAIG BOX: I think of your team as the people who give away the free Androids at KubeCon.
SARAH D'ANGELO: Yeah, I have definitely been one of those people. I hope you have an Android from me. And we hope to get people to talk to us, because from our perspective, it's hard to know what everybody needs. We only know what people tell us, and so yes, we're always happy to give out a free Android if you want to give us some suggestions.
CRAIG BOX: Do you think this is a role that only exists at big company scale? Do small companies have user experience researchers, or is it a Google only thing?
SARAH D'ANGELO: It's definitely not a Google only thing. I think it's obviously very different at smaller companies versus larger companies like Google, I think in some of the same ways that being an engineer at larger companies is different than smaller companies. We're working on a wide range of products for millions of users, and so we have these unique opportunities to challenge ourselves in new areas and contribute to decisions that influence millions of users.
And being at a larger company gives us access to a very large UX community. So at Google, we have UX designers, engineers, writers, and managers. And so we all collaborate and grow, whereas maybe at smaller companies, it's not quite so large. But UX is definitely critical to any new company who wants to understand their users and keep the user in mind.
CRAIG BOX: How does the user experience role differ when you're working on a hosted product versus a product that a user installs themselves like Cloud Code?
SARAH D'ANGELO: So I think that touches back on the question you asked earlier about the considerations we make for the IDE. With our hosted products, we can control a lot of the experience, whereas with Cloud Code and some of our other installable products, we don't always know exactly what's going on on the user's end.
And so we have to do a lot of work to understand our users' workflows and what their goals are so that we can fit into their environments. We don't always know what else is going on in that space, so we have to be mindful of what they're trying to accomplish and how we can easily fit in. So there are more unknowns. There are more constraints. But there are also a lot of new and exciting challenges in being flexible and adapting to new environments.
CRAIG BOX: Given that Cloud Code is a reasonably new project, where would you like to see it go?
PATRICK FLYNN: Currently, we're really running with this pretty deeply held intuition that the local inner developer loop is where we can really provide a lot of value to people who are experiencing pain targeting Kubernetes. And it's not just intuition-- Sarah and her team spend a lot of time talking to people. We know that people are frustrated with the slow iterations and the configuration elements involved in debugging.
And now that we have that experience, we really want to polish it and deliver as much value in that loop as possible, and also build up a user base so that we can start getting feedback from people about where they would like us to spend time to make Kubernetes, and also GCP, the best developer experience possible.
One area that we're looking at is the day two and the day three developers, who are not just kicking the tires or iterating on development, but want help taking their small application to production in a way that makes sense in the modern software era, right? With continuous integration, with code reviews, with possibly CD, but some kind of delivery tool. And we're looking at how the IDE, and potentially the CLI tools that we're developing, can help people there.
CRAIG BOX: And Sarah?
SARAH D'ANGELO: Yeah, I'm really excited to see where Cloud Code goes next for all the reasons that Patrick mentioned. And I'm really hoping that people will come out and use it, and then come talk to me about your experience. Do let me know. As Patrick mentioned, that is something that we really take seriously. So I hope that more people start to use it and give us great feedback so then we can continue to grow and expand, and contribute to this ever-evolving ecosystem and make great tools that people love.
CRAIG BOX: Does Cloud Code have a facility for people to give feedback from within the tool?
SARAH D'ANGELO: Yes. So from within your IDE, you can submit feedback as issues on our GitHub page, which we're always checking and reading. Occasionally, you might be prompted for a survey, through which you can provide feedback, or you can reach out for an interview with me. So we've got all of those options.
CRAIG BOX: All right. Thank you very much, both, for joining us today.
PATRICK FLYNN: Thank you.
SARAH D'ANGELO: Thank you.
CRAIG BOX: You can learn more about Cloud Code at cloud.google.com/code, and you can find the links for the GitHub pages in the show notes.
[MUSIC PLAYING]
CRAIG BOX: Thanks for listening. As always, if you enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on Twitter @KubernetesPod, or reach us by email at kubernetespodcast@google.com.
ADAM GLICK: You can also check out our website at kubernetespodcast.com, where you can find transcripts and show notes. Until next time, take care.
CRAIG BOX: See you next week.
[MUSIC PLAYING]