#51 April 30, 2019
Gabe Jaynes is a DevOps Architect at KeyBank, an American retail bank. KeyBank was an early adopter of containers, and Gabe talks about the reasons they undertook this transformation. Craig and Adam also celebrate our first birthday and spoil the concept of spoilers.
Please say hello and 🎂🎁!
ADAM GLICK: Hi, and welcome to the Kubernetes Podcast from Google. I'm Adam Glick.
CRAIG BOX: And I'm Craig Box.
[MUSIC PLAYING]
Can we assume everyone listening has seen "Avengers: Endgame" already?
ADAM GLICK: Certainly not!
CRAIG BOX: I made sure to see it early because I didn't want anybody spoiling it for me. I mean, I'd be heartbroken if, before watching the movie, I'd learned that [CENSOR BLEEP]
ADAM GLICK: I said, no spoilers!
CRAIG BOX: Oh, not a spoiler, though make sure you do a Google search for Thanos and click the little gauntlet icon in the top right-hand corner, the little info box. It's a little Easter egg. But you probably do want to make sure you've seen the movie first, because you never know what you'll find spoiled.
ADAM GLICK: You should have seen "Infinity War" in order to click on that.
CRAIG BOX: It's getting to the point where you pretty much have to have seen all 22 preceding movies to understand everything that they want you to.
[ADAM CHUCKLING]
ADAM GLICK: Fair enough. I will say that when I watched "Infinity War," there were certain characters whose names they never even said. They just assumed you knew who they were and what their powers were. And I felt a little lost, as I've probably only seen about half of the movies.
Speaking of spoilers, I want to mention that I'm really enjoying the latest season of "Game of Thrones." But I will leave it at that until it completes so as not to ruin that for anyone.
CRAIG BOX: I still haven't seen it from the beginning. So you may have to wait a lot longer than that.
[ADAM CHUCKLING]
Played any new games lately?
ADAM GLICK: I have. One I really enjoy is called Gorogoa. It is everything that I love about the creative and artistic puzzle games that are out there. It really shows what some great indie developers are putting out. So if you like things like "Framed" or "Monument Valley"-- these kind of creative and different ways to approach a mobile game-- I've found it a really fun, artistic, and creative way to enjoy a game on a phone.
CRAIG BOX: And looking on Twitter, I see you've had some good conversations this weekend.
[ADAM CHUCKLING]
ADAM GLICK: Ah, yes, we were wandering through the local park near where we moved, and it's always full of some really interesting people. We really love it.
But there was this one gentleman that I just love. We've seen him there a couple of weeks. And he just has a giant sign that just feels so Seattle. He has a big whiteboard that says, "I desire a conversation. Will you talk to me?"
And he's sitting there in a tie dye shirt, and hat, and sunglasses. He has this purple towel on his lap, and he is hand-feeding a squirrel while sitting there and chatting with folks. And I just was like, I love this guy.
CRAIG BOX: Was the sign written in human or in squirrel?
ADAM GLICK: It was written in English.
CRAIG BOX: Got some very smart squirrels in Seattle, then, obviously.
[ADAM CHUCKLING]
ADAM GLICK: Indeed. Well, you know, we are a big reader city. So the library is right there, and you know, everyone's reading. Also, I wanted to call out that it was one year ago today that we launched the first episode of this podcast. And we wanted to offer our heartfelt thanks to everyone who has supported us with the podcast, from our guests to, especially, you all, our audience, for tuning in each week.
We love doing this podcast, we love meeting all of you at events, and we look forward to the next year as we continue to bring you news and interviews from across the cloud-native world.
CRAIG BOX: Yes, thank you. Please come up and say hello if you see us at KubeCon EU. And if you don't follow us yet on Twitter, please pop on and check out @KubernetesPod. We'd love it if you dropped us a line to say hello. We love hearing from you all, and we want to have the chance to thank you individually for sharing a few moments of each week with us.
ADAM GLICK: Let's get to the news.
[MUSIC PLAYING]
Docker announced on Monday that on Thursday, April 25, data from 190,000 accounts had been exposed in a hack. Usernames and hashed passwords were taken, as well as API tokens for GitHub and Bitbucket if they were attached to the account for auto-builds. Docker is asking users to change their Docker Hub password, as well as the password on any other accounts that shared it. If you had auto-builds set up, Docker has revoked those tokens, so your auto-builds will fail until new tokens are created and shared. Note-- to do this, you may have to unlink and relink your accounts.
Since this only impacted some Docker Hub users, Docker has said that they have sent password reset emails to all impacted users. So you should check your email to see if you've been impacted. Docker also posted the slightly confusing statement that you don't need to change your password unless you were emailed, but that changing your password isn't a bad idea. Given that advice, it might make sense to update your password out of an abundance of caution.
CRAIG BOX: Rancher Labs has released an operating system to accompany their K3S-- that's K-3-S-- Kubernetes distribution. They have again named the project-- spelled K-3-O-S-- with no pronunciation guide. So while Adam wanted to call it "KeyOS," I'm going with "Chaos," as if it was from "Get Smart." K3OS is based on the Ubuntu kernel, with tools added from Alpine Linux. It also has K3S built in, so patching your OS and Kubernetes happens in one step.
Rancher says that this is targeted at fast-startup application settings like edge computing. K3OS is a slimmed-down OS, conceptually similar to things like Google's Container-Optimized OS and CoreOS, which Red Hat bought last year. It's also a little like RancherOS, which it seems they might have forgotten they make.
ADAM GLICK: One cluster, or many? It's easy to communicate within a cluster, but cross-cluster boundaries can be hard. A service mesh can be a solution, but it sometimes raises more questions about how to architect your network and where to put your fault domains. Andrew Jenkins, the CTO of Aspen Mesh, talks about extending Istio across clusters in an article on InfoQ looking at various trade-offs when running in a multi-cluster environment. No one answer is suited to all use cases, but if you can identify your requirements and trace a line with your finger, he ends with a table of options.
CRAIG BOX: The Google Cloud Security Team had a very busy Next conference, launching a variety of new features. Product manager and episode 8 guest Maya Kaczorowski, along with Anne Bertucio, has written a summary of them all, including enhancements to identity, secrets, software supply chain, container isolation, and OS hardening. If you didn't have a chance to get to Next and you have a passing interest in container security, you should check it out.
ADAM GLICK: The Linux Foundation is looking for people to help localize the Kubernetes docs. The community has been expanding the documentation to additional languages over the past year, with partial documentation now available in nine languages. There are more languages being added, like Hindi. So if you are multilingual and would like to help support one of the existing languages or add a new one, the folks at SIG Docs would love to hear from you.
If you're interested in the history of docs localization, take a trip all the way back to episode 5. But be warned, we weren't anywhere near as slick back then.
CRAIG BOX: If your applications depend on SSL or TLS to the point where hardware acceleration is under consideration, Mikko Ylinen from Intel has a deal for you. On the Kubernetes blog, he explains how a combination of the device plugin and RuntimeClass features, both beta in Kubernetes 1.14, can enable this use case. A device plugin is used to register the hardware accelerator on a node with Kubernetes, so you can schedule work to nodes that have the hardware. RuntimeClass is used because you need Kata Containers to virtualize that hardware and pass it through to your container.
Mikko suggests that SSL termination for ingress is the most likely reason to use this. An example is given for HAProxy, which uses OpenSSL, and work is being done to enable Envoy, which uses Google's BoringSSL fork, to do the same.
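For a sense of how those two features combine, here is a minimal sketch, assuming a QAT device plugin and a Kata Containers RuntimeClass are already installed on the cluster. The resource name, RuntimeClass name, and image below are illustrative assumptions rather than values from the blog post.

```python
# Hypothetical example: a pod that asks Kubernetes to schedule it onto a node
# with a QAT accelerator (exposed by a device plugin as an extended resource)
# and to run it under a Kata Containers RuntimeClass. Resource and class names
# are assumptions; check what your cluster actually registers.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="haproxy-tls-termination"),
    spec=client.V1PodSpec(
        runtime_class_name="kata-containers",  # assumed RuntimeClass name
        containers=[
            client.V1Container(
                name="haproxy",
                image="haproxy:2.0",
                resources=client.V1ResourceRequirements(
                    # Device plugins advertise accelerators as extended resources;
                    # the exact name depends on the plugin deployed on the node.
                    limits={"qat.intel.com/generic": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```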
ADAM GLICK: The CNCF is rebranding the EmpowHER networking event to be EmpowerUs. This is an event being held at KubeCon EU, and focuses on female and non-binary attendees by providing networking opportunities and hors d'oeuvres. The event will be held Monday, May 20, from 6 to 8 PM at the Hotel Porta Fira. Registration is now open if you're interested in attending.
CRAIG BOX: And that's the news.
[MUSIC PLAYING]
Gabe Jaynes is a DevOps architect with KeyBank. Welcome to the show, Gabe.
GABE JAYNES: Thank you. Thanks for having me.
CRAIG BOX: Can we start by asking what exactly is KeyBank?
GABE JAYNES: So Key is a 200-year-old bank that has gone through many mergers and acquisitions over the years. We have a wide footprint across the country, stretching from Alaska, but we're based in Cleveland. We have about 1,200 branches, and we're a top-20 US bank by deposits. So we like to make the joke that we're the biggest bank no one's ever heard of.
CRAIG BOX: And so your customers are consumers, as opposed to financial services for businesses, or...?
GABE JAYNES: We do a bit of both. A lot of what we do is in the consumer space. We have a large online banking platform for those consumers, as well as a treasury service for our corporate customers. And for both of those, we actually host the front ends-- all the client-facing stuff-- on Kubernetes, within containers.
CRAIG BOX: Wow. So you mentioned before you're a 200-year-old company. So that was obviously pre-internet. What was the journey like for you? First of all, like, when did KeyBank start adopting digital banking?
GABE JAYNES: It definitely goes back to the early ages of the internet, where we had a very simple web page. And actually there was a sort of Max Headroom character that someone was trying to develop, as the guidepost for your digital experience with KeyBank.
CRAIG BOX: The Clippy of money.
GABE JAYNES: Yes, indeed. And it didn't last long, from what I can see on the Wayback Machine. But we've developed from there. We've had several internet banking solutions that we've developed in-house. But back in 2015, we realized we were kind of behind our peers-- studies were showing it in the quality of the experience, et cetera. So we realized we had to rewrite the entire thing as a new, modern web app, using modern technologies.
CRAIG BOX: Are you regulated to the same degree that the large banks and the sort of industrial finance sector is?
GABE JAYNES: Yes. So many of the same regulations, which is always an interesting topic, and always an interesting way to shut down progress. But we find ways to kind of work with our regulators, work with our audit and legal teams, and make sure that we are crossing all the T's, dotting all the I's, and we're ahead of the game in terms of security and our ability to deliver reliably.
CRAIG BOX: So the genesis of your Kubernetes story goes back to 2015. What happened then?
GABE JAYNES: We were looking to rewrite our web and mobile apps. And what started to happen is, we were going to acquire another bank, and it was a major acquisition for us. We realized that the timeline for that development was too far out, to the point that our new customers were going to have to convert experiences twice-- once to KeyBank, and then once to the new KeyBank experience.
We realized we didn't want that. So the project was called Digital '17. We were going to deliver in the middle of 2017. And all of a sudden, we had an acquisition that was going to be customer day 1, October 2016. So that customer day 1 was a big date for us. If we couldn't deliver long before that and convert customers over internally, we weren't going to be able to deliver a new experience at all, and we were going to shut down the project.
So we were in the architecture space at the time, drawing pretty pictures in Visio. And we realized we needed to roll our sleeves up and actually deliver something that could help accelerate this experience, accelerate the development environment, move away from the waterfall model of, every week or two you get into a testing environment and then you see all the defects you've introduced a week or two before.
CRAIG BOX: I'm glad you could do it within two weeks.
GABE JAYNES: Yes, indeed. So it was a very manual process. It was a very laborious process for the development and product delivery teams. And we wanted to make sure that that was a better, smoother, faster, automated experience in every way.
CRAIG BOX: With the way everything changes, sometimes it's hard to remember what things were like four years ago.
GABE JAYNES: Indeed.
CRAIG BOX: What was the state of the cloud at that time, to you?
GABE JAYNES: So for us, cloud was an aspirational goal. We definitely liked the idea of using someone else's automation and someone else's resources, and getting that frictionless "if I need a server, I get a server." But at the same time, we were looking a little bit deeper than that and realizing that the world of hypervisors, the world of VMs and building heavy platforms on top of that, and then patching and maintaining those, wasn't necessarily something that was going to make us faster and a differentiator in the banking space with our peers.
CRAIG BOX: You were running virtual machines on-premises at this time?
GABE JAYNES: Yes.
CRAIG BOX: Right. And the automations that you were looking at then, what kind of technologies were you evaluating in order to make this modernization?
GABE JAYNES: There were a large number of cloud orchestration tools that we had looked at, from any number of vendors. And we continued to come back to the idea that the orchestration they were trying to build-- a template that would deliver a three-tier architecture, et cetera-- wasn't really fitting our needs. Most of our apps are very custom, very niche apps-- a lot of Windows things meant for more workgroup sizes, but we build them into the various lines of business that we have.
So looking at that, it was too varied to say, this template fits all of our apps. We realized we needed to be more custom. Or, for things we developed internally, we wanted to make sure we had the paved road of the fastest, best delivery that we could provide. And that's really what led us to containers and trying to find something to orchestrate containers so we could run them in production.
CRAIG BOX: All right. And so what was the evaluation like for tooling available at the time in the container space?
GABE JAYNES: At the time, the way we looked at it, we looked at what Mesos was offering, what Docker Swarm was offering, what Kubernetes was offering, and we spent a little time with Pivotal and what they were bringing to the table. We realized at that time-- and this is December 2015, I'd say-- that Kubernetes was ahead of the game in what they could deliver in the enterprise space specifically for the needs that we thought we had at the time.
We learned very quickly what we didn't know, and we learned very quickly what we'd need to solve for if we were to build this system.
CRAIG BOX: Well, let's dig into those. What didn't you know?
GABE JAYNES: One of the things, when we started to look at what it would take to run containers in production and orchestrate them with Kubernetes, there were a lot of blanks in what should be our on-prem registry, what should be our networking layer, what's our software-defined network, what's the ingress tier, are we going to use F5, and is that mature yet? Are we running something like HAProxy, or are we running something else?
So there were any number of decisions to be made. And we realized if we were going to do this alone, we were going to have to do a lot of testing. If we're going to roll up our sleeves and really do this ourselves, and roll our own, it was going to be troublesome and it was going to take a lot of time just to pick the right technologies.
We were OK with that, to iterate through it and deliver constantly, while kind of abstracting it away from our development community. They don't need to know the secret sauce underneath as long as it's working, running their code, and they can get to their services. But we thought we were going to spend a lot of time on that, and that was daunting at the time.
CRAIG BOX: And so what things did you have to build?
GABE JAYNES: The biggest thing that we built wasn't even the Kubernetes side. We ended up partnering with Red Hat for that. Their OpenShift product at the time was very far ahead in the enterprise space. It checked a lot of those boxes for us.
CRAIG BOX: Great.
GABE JAYNES: I think, at this point in time, it's more of a commodity. A lot of those things are very well understood. And you still have a number of choices, but the choices have a lot more meaning behind them in that you're trying to accomplish a specific task. When we were looking at it at that point, we really couldn't tell one networking layer from another. So it was nice to have a platform meant for the enterprise that made some decisions for us and allowed us to accelerate past making those decisions ourselves.
So the majority of the time we spent was actually outside the Kubernetes space, really in the CI/CD pipeline. A lot of building out the automated testing, building out Jenkins as our Swiss Army knife for builds and all that other stuff. We had looked at a release platform so that we could build and release in an enterprise production way, very similar to what we were trying to do with the rest of our code management and deployment. And we actually jettisoned that entire idea, because we found it wasn't necessary for the kind of bespoke pipeline that we were creating for internet banking. We did everything in Jenkins.
So it was nice that we could kind of pare down what we needed to attack. The surface to attack was now: figure out pipelines, figure out automated testing, figure out delivery in containers, and forget about everything else for right now. So it allowed us to move really quickly, and I think it probably saved us at least three months of going back and forth and deciding if we had the right technology stack. As much as I love talking technology stacks and putting slides together with all sorts of logos on them and saying, this is what we do for everything, that conversation can change every day if you want it to. There's too much in this space not to.
CRAIG BOX: Do you think that the situation is better today, especially in the open-source world, given that the vendors put together a happy path and say, use these tools? There's so much choice. You've mentioned, say, network layers. There are still Flannel, and Calico, and Cilium. There are options out there. Do you feel that it's overwhelming?
GABE JAYNES: It is slightly overwhelming to understand what the differentiators behind them are. I think the nice thing for us is, every enterprise is different. Every enterprise has a different set of hardware and software. Whether it's the routing tiers, switching tier, the load-balanced tier, or the firewall tier, all of these things are different.
So looking at whether a product is malleable enough that it can interact with all of those, and someone's created the plugin that goes out there and manages your load balancer or whatever it happens to be, that's helpful. But it's very helpful when a company comes along to help me with a product, and they make some of those decisions, and they at least set up kind of the guardrails so I don't have to look at all of the world if I don't want to. But at the same time, if I really need to for some reason, I can go off the paved road and find my own thing.
CRAIG BOX: Kubernetes obviously makes that very easy in that it supports plugins for disks, and storage, and networking, and so on. And then the abstractions that make that possible-- we now have the CNI, and the CRI, and the CSI-- Miami-- and so on.
[GABE CHUCKLING]
All of those things-- do you feel that that has paved the way for the integrations with the existing systems that you had, made this possible?
GABE JAYNES: Yeah, definitely. That's one of the best things about this ecosystem: once Kubernetes had been well accepted by every major cloud and every major vendor, that kind of accelerated things, and now it's very mature. Versus where we were when we started, which was trying to figure out what was mature and what wasn't. Is this ready for prime time? Can we put this in production, and can we run our bank on it-- the most critical app, the most accessed way our bank is seen by customers?
Is this good enough for that? And am I going to be able to tell my executives, and our CEO, and our board that we're going to be OK with this, because we've chosen what was, at the time, cutting-edge technology for an enterprise, and we're delivering our most critical project on it?
CRAIG BOX: I wanted to ask about that. Because you're talking about starting this project in 2015 and launching 2016-2017. So that is in the first two or three years of Kubernetes' existence, let alone its maturity. What was the conversation like with the senior executives?
GABE JAYNES: We went to the product team and the product delivery manager for that internet banking experience. We decided that we could help accelerate them, and we sold them by saying, look, if you check in code today, I can deploy it a minute later. Now, give me another few minutes, and I'll run a bunch of tests against it. If you write those tests, we'll do a bunch of automated testing. And then we'll do some end-to-end testing if you want.
CRAIG BOX: What will they do for the rest of the two weeks?
GABE JAYNES: It's an interesting question. Apparently, they'll code more and deliver faster.
CRAIG BOX: Brilliant.
GABE JAYNES: So that was what was exciting our delivery team. They were very interested in what we were selling them, which was we'll accelerate everything you do and we'll give you instant feedback, rather than waiting a few weeks to see what bugs and defects you'd introduced into the codebase.
CRAIG BOX: Do you have a metric for the performance improvement or the turnaround productivity?
GABE JAYNES: We do have some metrics. I don't have one of my PR slides in front of me. We're very good at keeping those numbers and having a slide deck that kind of talks through what we've done. But we were going from a lot of manual testing that would happen every several weeks-- and that's hundreds upon hundreds of tests in a Word document that just lists, do this, then this, then this, and see what happens-- to running tens of thousands of tests every hour, basically. We're now up to running millions of tests a day against all of the projects that are delivering through these pipelines.
CRAIG BOX: What's an example of one of those tests?
GABE JAYNES: Simple login tests. When we initially delivered this, we had some interesting issues with the login flow. So we continually added testing around that, around forgot-password, and around really simple things. It seems like a simple service, but because of the way users interact with it, we needed to make sure it was foolproof. And especially as we were acquiring another bank and were going to have hundreds of thousands of new users trying to log in and use all of this new functionality, it needed to be bulletproof.
So we wrote a lot of testing around, here are all the scenarios we can come up with that test the login service, the forgot-password service, all that stuff. That doesn't sound that interesting, but it was incredibly valuable for us. It paid off a lot to be able to say, every time you change or modify the service, we're going to run a bunch of tests to make sure you didn't break it.
CRAIG BOX: So you mentioned, again, 200-year-old bank. I'm going to assume you've moved on from storing data on parchment or clay tablets or something. There's going to be some sort of mainframe behind this.
GABE JAYNES: Indeed. Like many of the fine banks of this nation and others, we have a mainframe.
CRAIG BOX: Is it all written in Fortran?
GABE JAYNES: It is a lot of COBOL, it is a lot of JCL, it's a lot of things that I don't usually get my hands on. But we definitely interface with the mainframe every day. We have a very strong relationship, a very reliable system that has been doing what it's been doing for many, many years. So moving away from that for any bank is a huge journey. And really, I wouldn't even say that mainframe is slower to deliver on. We just kind of abstract away from it with what we're doing in the services layers, and add all the functionality there.
So we have a bit of a strangler pattern, where we look at the systems that do not change on mainframe. When we add new features, we add those in the distributed space. And now, really, we're doing a lot of that in containers. So we're moving pretty quickly to abstract away from that so that we can be more agile around the mainframe without having to have very great depth in delivery on mainframe.
CRAIG BOX: Mainframes are an area that I'm not going to say I'm hugely familiar with. But the systems, to start with, we're talking VT220 terminals and so on. Do we get to a point where we are able to interact with them today in modern RESTful ways? Or do you have to implement that yourself in order to be able to build your cloud-native systems to connect to that?
GABE JAYNES: Specifically, the way we interface with it is that we wrote a service layer on top of the mainframe, through which we run all of our main transactions and, therefore, interact with the back-end data, the back-end systems that run on the mainframe. So we have several layers of abstraction. And even the main layer of abstraction is something custom-written at Key something like 20 years ago.
CRAIG BOX: So you've extended your own mainframe system to be able to talk in a way that the Kubernetes environment's able to connect to.
GABE JAYNES: Yeah.
CRAIG BOX: How do you test and validate data across those two worlds? Like, you're talking real customer data here. Are you able to run tests that validate that it works end to end with the mainframe as well?
GABE JAYNES: Yeah. I think, really, all we're doing there is mainframe becomes more of a database. It's the data source, it's the truth. Everything that we look for, that data is there. So we load that with the appropriate data to test against. And then we're running the same transaction. So it just becomes another layer in the whole testing flow. We don't do anything very specifically mainframe-focused because we're looking at it as a service as a whole.
CRAIG BOX: Now you mentioned that this is for an acquisition. And we're talking about online banking, which is very modern, very web-facing technology. Are there other legacy technologies inside the bank that you're saying this platform would be really good for and thinking about migrating?
GABE JAYNES: We definitely have taken some things you wouldn't normally containerize, and run them with Kubernetes orchestrating them.
CRAIG BOX: Some examples?
GABE JAYNES: I think one of the most interesting ones-- we had some legacy code that calls directly to the IBM CICS Transaction Gateway. So we've got this IBM CICS service running on the mainframe, and we have a client that connects to that. It's not a very modern tool, and it wants to be root on any system it runs on, et cetera. So it has some interesting peculiarities that running in a container wouldn't normally allow for. So we break the security model a little, and run it in a container, but in an extra-permission-having space.
CRAIG BOX: OK.
[GABE CHUCKLING]
And you're comfortable running that in the same environment that--
GABE JAYNES: Looking at the controls that are in place around it and what we've been able to do from a security management standpoint, we think we have that under control. But it's something we constantly review.
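To picture what that kind of exception might look like in practice, here is a minimal, hypothetical sketch: a legacy client that insists on running as root is granted exactly that, while broader privileges stay locked down. The image, namespace, and names are illustrative assumptions, not KeyBank's actual configuration.

```python
# Hypothetical sketch of a container that is allowed to run as root because the
# legacy client demands it, while avoiding broader privilege escalation.
# All names and images here are placeholders.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="ctg-client",
        namespace="legacy-exceptions",  # assumed: a namespace with extra review and controls
    ),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="ctg-client",
                image="registry.example.com/legacy/ctg-client:1.0",  # placeholder image
                security_context=client.V1SecurityContext(
                    run_as_user=0,                     # the legacy client expects to be root
                    allow_privilege_escalation=False,  # but grant nothing beyond that
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="legacy-exceptions", body=pod)
```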
CRAIG BOX: Do you have a security team that you work with?
GABE JAYNES: Yeah, and I think that was actually-- as we started to build this out, we contacted them very early. Containers were very new; this was something that, in the security space, not a lot of people had had their hands on-- they'd probably only read about it at the time. And we wanted to educate them very quickly, because we didn't want to get to the 11th hour and have them say, no, you can't actually run that in production-- we don't approve any of this, we don't trust it, et cetera.
So we've definitely gone through that. We've run through PCI and some other things, and really tried to educate them on why this is safe, why it is isolated, why it allows us to do what we do, and why doing things in software is OK, versus having specific hardware boundaries.
CRAIG BOX: You've been a design partner for GKE On-Prem. How did that come about?
GABE JAYNES: Being in the container space, we've been very interested in this area. We love what Kubernetes has allowed us to do, so we're always looking for what's next. We have a number of ways we want to get to the cloud. And that's a journey that we've been on for a long time and haven't necessarily executed on, because there's been no workload that's made sense to this point.
But I think we were looking for an easy gateway to connect us and allow us to run workload anywhere. That's kind of the dream of hybrid cloud that everyone has. And we have ways to do that. There's nothing that keeps us from doing that with any platform that we have. But when Google came out and said we'll be able to manage multiple clusters from the same interface, whether it's on-prem, or in this cloud, and now in that cloud and the other, that's very interesting to us. Because that takes away a lot of the management overhead and a lot of the design overhead we'd have to go through otherwise. So very inviting to us to see that and to say we can be a part of that journey.
CRAIG BOX: And what has the journey been like in terms of, for example, recommendations that you've made back to our engineering teams to improve things?
GABE JAYNES: I think it's interesting to see a product like this come about. Because what works in a lab versus what customers are doing in their enterprise, it's infinitely varied, and we're no different. We check all the boxes, we have all the right prerequisites, but the way we usually set it up is different from everybody else.
So it's been interesting to kind of feed back and say, OK, we do networking like this, we expect our F5 device to be the default gateway. And you did not see that coming-- you just thought it was a load balancer. So interesting things like that force us to look at the way we do most of our regular, standard networking setups, and then feed that back and say, hey, here's a scenario you might not have seen before.
CRAIG BOX: I think one of the things with cloud is there's sort of an assumption that you will do things in the best-practice way. And I think that is, in large part, because people are able to build new systems. When you're building something which effectively needs to slot into an existing environment, just the combinations-- do you think they multiply out?
GABE JAYNES: Yeah. And it's interesting to see just the networking space, specifically. You go and look at the way clouds work, and it's a very flat network. And then we invented VPCs so we could make it look like your data center. We look at that and go, oh, cool, yeah, we want a bunch of zones we're going to create, because that's how we do perimeter control of networking. But that means I'm tromboning all this traffic out to a firewall to come right back to a service that may sit right next to it, physically.
So that is definitely something that we're educating ourselves on. And that's partly our network team, that's our security team, that's everyone coming together and saying, this is what we want to look like. We want to look like how the cloud's run. Because if Google, and Amazon, and Microsoft aren't doing it, we probably shouldn't be doing it in our data center.
So there's definitely a shift where we have to look at our core patterns and the way we develop, and design, and architect solutions, and come back to the table and say, well, this is really the modern way to do it. And interesting projects like Istio and other service meshes-- Consul, Linkerd, and others-- allow us to think about that even further, and drive it further into software, so that as long as the network is delivering my packets and doing all the plumbing, I can control how the networking works at a higher tier.
Certainly, micro-segmentation comes up all the time as we talk to our security partners. And there are numerous ways to solve it. But as we look at what we're able to do coming out of the container space-- as kind of the genesis for this-- it is allowing us to move a lot faster and solve these problems in a much better place, one that's closer to the application and the application logic than otherwise.
CRAIG BOX: I've heard of this use case for Istio explicitly, with a financial services customer who has a VPC with 200 IPs that needs to have all the secure services running on it. And then that's full. And they've got so many IPs available in a different zone, but they're unable to put things there, because the security is inherently tied to where you are in this pre-segmented network. Being able to move the identity to the application layer and effectively make the network flat again-- for the audience, Gabe sort of opened his eyes wide at that point. I think that's something it sounds like you'd be interested in.
GABE JAYNES: Yeah, that is definitely one of the most exciting things I'm seeing in the space now. And certainly with Anthos and what we've been doing there, it's allowing us to play with that, where we weren't doing that in our other environment, because it's kind of set in stone-- we run it this way.
So it's been very cool to at least have access to that and start to work on use cases where we can really change the way our networking team and security teams are looking at this with us. Because if we can drive it back to the app team's ownership, it makes everybody move faster. It makes everything more visible, more traceable, and a lot more self-service, which is something we want to drive out through everything in the enterprise.
CRAIG BOX: Could you see a world where the service mesh extends beyond your Kubernetes environments and connects together real legacy workloads?
GABE JAYNES: Definitely. I would love to see that come about as we start to build it out. We're very much at the beginning of that part of the journey. It extends deeper than just running applications, especially stateless applications as we do now. But we want to see that extend out to everything that we're doing. There are a number of ways to put that control layer in there.
But I think, at this point, we're going along with Istio, and we want to see how that goes. We're looking at several other products to do that-- or other tools, I should say. But the way Google's delivering GKE On-Prem and Anthos with Istio built in allows us to play with that and kick the tires very quickly.
CRAIG BOX: What other projects or products in the ecosystem excite you right now?
GABE JAYNES: I think the way the Operator Framework is coming about. We have an interest in running databases orchestrated by Kubernetes. It doesn't sound like a good idea, but it's something we want to at least try out and see if it helps us move faster, specifically in the development environments. And then, moving forward from there, we want to be able to deliver a database as just another resource or service, on-prem. We don't have that capability built out today anywhere else, and we thought it'd be great to prototype it in containers, and then provide high availability and some other things using the Operator Framework, and see if that actually provides us some value.
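As a rough illustration of "database as just another resource," here is a hypothetical sketch of what a developer-facing request could look like if an operator were handling provisioning behind the scenes. The API group, kind, and fields are invented for illustration and are not from any specific operator KeyBank uses.

```python
# Hypothetical sketch: a developer asks for a database by creating a custom
# resource, and an operator (e.g. one built with the Operator Framework) does
# the provisioning. Group, version, kind, and spec fields are assumptions.
from kubernetes import client, config

config.load_kube_config()

database_request = {
    "apiVersion": "databases.example.com/v1alpha1",  # hypothetical CRD group/version
    "kind": "PostgresInstance",                      # hypothetical kind
    "metadata": {"name": "team-a-dev-db"},
    "spec": {
        "version": "11",
        "storageGB": 20,
        "highAvailability": False,  # development environments first, as Gabe describes
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="databases.example.com",
    version="v1alpha1",
    namespace="team-a",
    plural="postgresinstances",
    body=database_request,
)
```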
CRAIG BOX: And what do you think that the next step is for the digital transformation of KeyBank?
GABE JAYNES: I think it's going to be scaling faster. As we extend out into more and more development teams taking advantage of what we've built-- our pipelines, our automated testing, our containerized environments-- we want to see people move faster with that, deliver faster. We say CI/CD, but our CD isn't as continuous as it might sound. That's something we're building up so that we can deliver faster when everyone is ready to do it, and start taking out some of the roadblocks and some of the manual checkpoints that we still have in a number of our applications.
So really it's scaling out to the rest of the enterprise so everyone can benefit from the speed and the agility we're delivering, and then making sure we do it in a very maintainable way, which is somewhat hard to do for us. We built something that was very focused on one set of apps. And then saying, well, here's the next 100 apps that want to go down this path as well, we didn't plan for that incredibly well. And it's been really fun to solve some of those challenges.
CRAIG BOX: From an enterprise perspective, if there's one thing you could change about anything in the cloud-native ecosystem, what would it be?
GABE JAYNES: There's too many choices, as we've already talked about. That makes it very hard to do the day job of keeping up with the enterprise work that I do, and then look around and learn what's replacing this thing that I think is sacred to me today.
CRAIG BOX: You need someone to snap their fingers and remove half of every open source project out there.
GABE JAYNES: Yeah. That sounds so bad, because the vibrancy of the ecosystem is great. But at the same time, it pains me every day.
[GABE CHUCKLING]
CRAIG BOX: All right, Gabe, thank you so much for joining us today.
GABE JAYNES: Thank you very much.
CRAIG BOX: You can find Gabe in the basement of a KeyBank office somewhere in Cleveland, Ohio. And you can find KeyBank on the web at KeyBank.com or at one of 1,200 branches around the USA.
[MUSIC PLAYING]
ADAM GLICK: Thanks for listening. As always, if you enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on Twitter, @KubernetesPod, or reach us by email at kubernetespodcast@google.com.
CRAIG BOX: You can always check out our website at kubernetespodcast.com, where you can find show notes and transcripts. Until next time, thank you for listening and take care.
ADAM GLICK: Catch you next week.
[MUSIC PLAYING]