#48 April 9, 2019
Anthos (previously known as Cloud Services Platform) has just gone GA at Google Cloud Next. One of its new features is Anthos Migrate, a tool for migrating monolithic apps directly to containers. Issy Ben-Shaul is a Director of Software Engineering at Google Cloud and led the team building Anthos Migrate. He talks to Craig and Adam about it.
Do you have something cool to share? Some questions? Let us know:
CRAIG BOX: Hi and welcome to the "Kubernetes Podcast from Google." I'm Craig Box.
ADAM GLICK: And I'm Adam Glick.
CRAIG BOX: Welcome, everybody, to the week of Google Cloud Next. Next is just starting as we record our show this week. And we're having our live show here tomorrow. So we'll keep you up to date with the news as it happens this week. And we'll have something special in store for those of you who are able to join us tomorrow. In the meantime, what's making you happy this week, Adam?
ADAM GLICK: Oh, just a plethora of things. Enjoying "The Umbrella Academy" on Netflix. I've started to tinker with "Hearthstone" for any of you who are hooked on online digital games.
CRAIG BOX: And who isn't?
ADAM GLICK: Apparently not me. But we're seeing-- I'm getting into it. It takes a while. It's a learning exercise.
And then finally, I stumbled across a band called Sunrise Skater Kids while listening to one of the channels I always listen to. It's one of what I've discovered is a number of bands by a guy named Jarrod Alonge. They're all basically meta parody bands of various musical styles, but all surprisingly entertaining.
He's got a death metal one. He's got a screamo band. He's got a pop/punk band.
CRAIG BOX: So he's Weird Al for YouTube.
ADAM GLICK: Yes, very much so. The song about death metal swimming in a swimming pool, and he does the entire thing with, like, water wings on-- just beautiful.
CRAIG BOX: It's a visual art form. You have to watch it online.
ADAM GLICK: [LAUGHS] True enough.
CRAIG BOX: I enjoyed a very long Saturday this week. You may remember in episode 43, I said that Tuesday just vanished, completely disappeared for me. But in return, I had 44 hours of Saturday. And I enjoyed both ends of them. Had a lovely flight and managed to watch a couple of films on the way over. Do you ever find on a flight there's enough time, if you get the requisite nap in the middle, you can watch all of the first film and then 2/3 of the second one? So I've got a third of a film I have to watch when I fly next.
ADAM GLICK: I see. You will have to let me know what your daytime escrow service is that stocks that all up for you that you can hold it like that. But I'm curious what film you did not finish.
CRAIG BOX: I did not finish "On the Basis of Sex," which is the biopic about Ruth Bader Ginsburg. I did, however, a few months back watch the documentary "RBG" about her, which was also available on the flight. And I can thoroughly recommend that. And in a week or so, I should hopefully fill you in on how the final third of this movie was.
ADAM GLICK: Excellent.
CRAIG BOX: Let's get to the news.
ADAM GLICK: The annual Google Cloud Next conference has kicked off today. Perhaps the biggest news that touches the Kubernetes world was the announcement of Anthos. Anthos is Google Cloud's managed Kubernetes platform that provides container orchestration, configuration management, service mesh via Istio, Cloud Build integration for CI/CD, and connects with their newly announced serverless offering, Cloud Run.
Anthos is the new name for Cloud Services Platform that Google Cloud announced last July. And the GA demo came with some interesting new details and upcoming features.
The first of these was that Anthos will not only do hybrid deployments, which were announced and demonstrated last year, but will now do multi-cloud as well. As part of the keynote, a connection to another cloud provider was demonstrated.
Google Cloud also showed off a connection to other Kubernetes clusters that aren't part of GKE, giving you a single view of all of your container clusters, regardless of distribution or location. They also demonstrated the ability to install Kubernetes applications from the GCP marketplace to these other clusters.
CRAIG BOX: In related news, Google Cloud announced the beta of Anthos Migrate. Anthos Migrate is a migration tool that takes existing monolithic applications and breaks them down into smaller microservices and containers, then moves them into Anthos or GKE. We'll dive deeper on this topic in today's show.
Google also announced a host of partnerships with Anthos across system integrators and hardware and software vendors. Finally, they announced Cloud Run and Cloud Run on GKE. Cloud Run is a new managed serverless service built on the open-source Knative project. Cloud Run allows you to run serverless apps in Google Cloud or on Anthos, thus giving you the convenience of serverless apps in the cloud, as well as the portability to run them wherever your GKE clusters are running.
ADAM GLICK: Cloud Foundry has announced that Project Eirini, the new vowel-heavy Kubernetes-based runtime, has passed its first tests and will now enable customers to choose which container orchestrator they would like to use: the existing Diego, or Kubernetes. Conceptually similar to Docker supporting application stacks on either Swarm or Kubernetes, we're glad to see more vendors supporting the infrastructure of the future.
CRAIG BOX: The Open Policy Agent, or OPA for short, has been approved by the CNCF Technical Oversight Committee for incubation. This promotes OPA out of the sandbox that it entered in March of 2018.
The project exists to help organizations define common policies, which are executed as queries in a language called Rego. A subproject of OPA called Gatekeeper integrates OPA with Kubernetes and is a collaboration between Google, Styra, and Microsoft.
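The flavor of such an admission policy can be sketched in Python. To be clear, OPA's real policy language is Rego, and Gatekeeper expresses policies as Kubernetes resources; the resource shape below is a simplified, hypothetical stand-in for an admission request.

```python
# A conceptual sketch (in Python, not OPA's actual Rego language) of the kind
# of admission policy Gatekeeper evaluates: reject any resource that is
# missing a required label. The dict shape is a simplified stand-in for a
# Kubernetes admission request, used only for illustration.

def check_required_label(resource, required_label):
    """Return a list of violation messages; an empty list means the policy passes."""
    labels = resource.get("metadata", {}).get("labels", {})
    if required_label not in labels:
        name = resource.get("metadata", {}).get("name", "<unknown>")
        return [f"resource {name!r} is missing required label {required_label!r}"]
    return []

good = {"metadata": {"name": "web", "labels": {"team": "payments"}}}
bad = {"metadata": {"name": "batch"}}

print(check_required_label(good, "team"))  # []
print(check_required_label(bad, "team"))
```

In real Gatekeeper, a constraint template would carry the equivalent Rego, and violations block or audit the admission request.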
ADAM GLICK: CRI-O has been accepted into the CNCF at the incubation stage. CRI-O was originally designed by Red Hat and Google and is an implementation of the Container Runtime Interface that connects to Open Container Initiative-compatible runtimes, such as containerd or gVisor. The project is currently maintained by employees of Red Hat, Intel, and SUSE.
CRAIG BOX: Cloud Native Buildpacks, tools for taking code and building OCI-compatible images, have entered beta. Originally developed by Heroku and adopted by platforms like Cloud Foundry and Knative, Buildpacks auto-detect the dependencies and language of your code and build container images without the need to make and maintain Dockerfiles.
A primary benefit of this pattern is avoiding having to manually rebuild, update, or rebase your code if something in the Buildpack changes, such as a security vulnerability being patched in one of the included components. This helps lessen your management and security burden.
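The detection step can be sketched roughly like this; the marker files are the conventional ones, but the ordering and the collapsed single-phase flow are simplifying assumptions for illustration.

```python
import os
import tempfile

# A simplified sketch of the buildpack "detect" phase: each buildpack checks
# the source tree for marker files, and the first match decides how the image
# is built. Real buildpacks also have a separate "build" phase; the detection
# order below is an assumption for illustration.
MARKERS = [
    ("package.json", "node"),
    ("requirements.txt", "python"),
    ("pom.xml", "java-maven"),
    ("go.mod", "go"),
]

def detect(source_dir):
    """Return the name of the first buildpack whose marker file is present."""
    for marker, buildpack in MARKERS:
        if os.path.exists(os.path.join(source_dir, marker)):
            return buildpack
    return None

with tempfile.TemporaryDirectory() as src:
    open(os.path.join(src, "requirements.txt"), "w").close()
    print(detect(src))  # python
```

Because the buildpack, not a hand-written Dockerfile, decides how the image is assembled, patching a component in the buildpack can rebuild every image without touching application code.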
ADAM GLICK: Deep dives into new 1.14 features are starting to roll out on the Kubernetes blog. Michelle Au of Google, and Matt Schallert and Celina Ward of Uber, have posted celebrating the general availability of Local Persistent Volumes. First available in 1.7 and moved to beta in 1.10, a Local Persistent Volume represents a file volume directly connected to a node.
This is important, as you can set up a Persistent Volume Claim, or PVC, then make sure that if a pod needs to be moved or restarted, it happens on the same host that has the Persistent Volume. It's just a shame they didn't call them PVC pipes.
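The node pinning works because the PersistentVolume manifest carries a nodeAffinity term naming the host. A sketch: the field layout follows the Kubernetes v1 PersistentVolume schema, while names like "node-1" and "fast-disks" are examples.

```python
# A sketch of what makes a Local Persistent Volume "local": the manifest pins
# the volume to one node via nodeAffinity, so the scheduler places any pod
# claiming it back on that same host. Field layout follows the Kubernetes v1
# PersistentVolume schema; the name, node, path, and storage class are examples.

def local_pv_manifest(name, node, path, capacity="100Gi",
                      storage_class="fast-disks"):
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolume",
        "metadata": {"name": name},
        "spec": {
            "capacity": {"storage": capacity},
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,
            "local": {"path": path},
            "nodeAffinity": {
                "required": {
                    "nodeSelectorTerms": [{
                        "matchExpressions": [{
                            "key": "kubernetes.io/hostname",
                            "operator": "In",
                            "values": [node],
                        }]
                    }]
                }
            },
        },
    }

pv = local_pv_manifest("local-pv-1", "node-1", "/mnt/disks/ssd0")
print(pv["spec"]["nodeAffinity"]["required"]["nodeSelectorTerms"][0]
      ["matchExpressions"][0]["values"])  # ['node-1']
```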
CRAIG BOX: This week's regularly scheduled vulnerability is not in Kubernetes but instead in Envoy, the cloud-native proxy. Two CVEs, a null parsing bug and a path normalization bug, both rated high, were patched in 1.9.1. That necessitated new releases of the software that includes it, and new Istio releases have been pushed in both the 1.0 and 1.1 series.
ADAM GLICK: Rainforest has posted a blog this week talking about their migration to Kubernetes from Heroku. They talked about hitting limits with scaling and looking to a number of alternatives, including Elastic Beanstalk, Elastic Container Service, Convox, and Kubernetes.
They talked through why Kubernetes was the best choice and then which Kubernetes service to use, as they evaluated the three major cloud providers. They provided some good insights and evaluation rationale for anyone considering Kubernetes and Kubernetes as a managed service.
CRAIG BOX: Google Cloud has announced expanded role-based access control with the beta of Google Groups for GKE. This enables GKE administrators to not only use GCP user accounts and IAM identities for control, but also now G Suite groups. New documentation has been posted to help users get started.
ADAM GLICK: Start up your wind machine and throw that goat, because Red Hat has announced MetalKube, a way to more easily set up and manage Kubernetes on bare metal servers.
MetalKube was created to provide an alternative to customers who were having to install a full OpenStack environment. The project is still new, with just six contributors. But the project is live on GitHub, and they're happy to have more people join in.
CRAIG BOX: Krew, the kubectl plugin manager, has been accepted as a subproject of SIG CLI. Krew was developed by Luk Burchard and Ahmet Alp Balkan from Google Cloud, and is itself implemented as a kubectl plugin.
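kubectl finds plugins, which is the mechanism Krew installs into, by looking for executables named kubectl-<name> on the PATH. A simplified sketch of that discovery step, ignoring the executable-bit check for brevity:

```python
import os
import tempfile

# kubectl discovers plugins by scanning the directories on PATH for
# executables named "kubectl-<plugin>". Krew itself is installed this way,
# as "kubectl-krew". This sketch skips the executable-permission check.

def discover_plugins(path_dirs):
    """Return the sorted plugin names found across the given directories."""
    plugins = set()
    for d in path_dirs:
        if not os.path.isdir(d):
            continue
        for entry in os.listdir(d):
            if entry.startswith("kubectl-"):
                plugins.add(entry[len("kubectl-"):])
    return sorted(plugins)

with tempfile.TemporaryDirectory() as bin_dir:
    for name in ("kubectl-krew", "kubectl-whoami", "helm"):
        open(os.path.join(bin_dir, name), "w").close()
    print(discover_plugins([bin_dir]))  # ['krew', 'whoami']
```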
ADAM GLICK: Finally, gVisor has a website. Located in the internet's new cool-kids neighborhood, the .dev TLD, gvisor.dev provides a much friendlier place to learn about the sandboxed container runtime and get involved in the project than the previous GitHub page. Check it out, and you can even see their new space helmet logo.
CRAIG BOX: And that's the news.
ADAM GLICK: Issy Ben-Shaul is a director of software engineering with Google Cloud. He leads the team responsible for migration technology and led the team building Anthos Migrate. Prior to his time at Google, he founded numerous companies, including Velostrata, which was acquired by Google Cloud. Welcome to the show, Issy.
ISSY BEN-SHAUL: Thank you. It's great to be here.
ADAM GLICK: Good to have you here.
ISSY BEN-SHAUL: Thanks. Same here.
CRAIG BOX: Congratulations on the announcement. When people are adopting a new platform like Kubernetes or GKE, do you find that they pick up old applications, do they use it only for new applications, or is it some combination of the two?
ISSY BEN-SHAUL: That's an interesting question. We find quite often that customers initially start with cloud native when they come to a new platform like Kubernetes. But then they realize all the benefits that come with it-- you know, the agility, the developer productivity, the density-- all the benefits that come with an orchestrator like Kubernetes-- and say, hey, I wish we could do the same for all the legacy workloads that we have on-prem. Could we do something like that for them? That's where the appetite comes from to start modernizing those workloads.
ADAM GLICK: How are people migrating their apps today? Right now, when they want to get into Kubernetes, what's the path people have to take?
ISSY BEN-SHAUL: Maybe I should step back, Adam, and tell you a little bit about how they're migrating today before even going to Kubernetes. Would that make sense?
CRAIG BOX: Sure.
ISSY BEN-SHAUL: So at Velostrata, we've been doing migration for several years, focusing on enterprise workloads and making sure that we onboard those workloads in a way that is most transparent and least disruptive, reducing downtime. We developed a unique streaming technology that really makes sure that, on one hand, they get early validation. They also get reduced risk, which is something that obviously CEOs really, really care about-- their workloads keep running, so they can test before they migrate, and they can basically revert back if something doesn't work.
So we provided this whole technology and solution that makes that work, and that's how they've been working. We call this, as you know, lift and shift-- migrating workloads as-is. That has been the mainstream way for enterprises to migrate to the cloud, realizing all the benefits that come with it: when they move to the cloud, they get all the services that are available to them, security and reliability that is unmatched, and of course, the agility that comes with IaaS.
But as they've been moving to the cloud, they also realize that's a good time to modernize. And to your earlier question: when they realize that, in addition to lift and shift, they can get the benefits of write once, run everywhere, the benefits of density-- all the benefits that come with an orchestration platform like Kubernetes, which we'll talk more about-- then they say, hey, can we do something to modernize? And can we modernize the workloads that we already have?
The fact is, even though they have some workloads that are written from scratch, they have tons of legacy workloads. And until today, they didn't have any good, effective solution to modernize them. When they do decide to containerize or modernize up front, they usually get into a very tedious process of breaking the monolith, breaking up their workloads-- a very tedious, very long, very expensive process that effectively blocks their migration to the cloud.
So the enterprises get into a dilemma. On one hand, they've decided that they want to move to the cloud. On the other hand, if they insist on the manual way of doing modernization, it takes a long time, they get stuck, and they're in a position where they're not getting their work done. That's one of the reasons why they've started to do some so-called local modernization, which is part of the Anthos GKE On-Prem offering.
But in general, as I said, migration has been either lift and shift, which doesn't address the need for modernization, or they start to modernize, get stuck, and it becomes years of projects without much progress.
CRAIG BOX: Let's have a look, then, at what it was like in the VM world before Kubernetes was a platform.
ISSY BEN-SHAUL: You mean from physical to virtual or VM?
CRAIG BOX: Or even from virtual to cloud.
ISSY BEN-SHAUL: OK.
CRAIG BOX: If you think back to a few years ago where Puppet and Chef and so on were the new hotness. People would sometimes just lift the VM as it was. They'd same, I have a VM format thing. I'm going to convert it to a machine image. And then, the other approach is, I'm going to rewrite everything. I'm going to take the bits that are on that and then write some scripts that will let me deploy into both places. Clearly, there's pros and cons to both situations. What have you seen people do in those two cases?
ISSY BEN-SHAUL: Again, it really is essentially a mix. We see some customers deciding that they want to rewrite everything. Other people decide to be more pragmatic, and they say, OK, let's migrate the workloads as-is, and as phase two, we'll modernize. What we see is there's no clear cut; it really depends on the customer's situation.
For instance, there are customers that have, for numerous reasons, decided that they need to move away from their existing hosted or private data center-- for instance, they need to evacuate it in order to avoid renewing their contract. So they need to move everything. We've had customers saying, our data center is on prime real estate; we already sold it, literally, and now we need to evacuate everything, no matter what. In that case, they don't have time to modernize everything first.
CRAIG BOX: So they're going to do the quickest possible thing they can do.
ISSY BEN-SHAUL: Quickest possible. Lift and shift. Move into the cloud. And then, in phase two, you modernize.
Some other customers take the other approach. They say, you know what? I don't want to be in a situation where I migrate the workloads and don't get the benefits of the cloud because it's a lift and shift. I don't get access to some of the services, et cetera. I don't get the benefits of agility, of scale-out, et cetera, because I'm using the old way. And they decide on doing all the optimization and rewriting initially on-prem, and then they move. So it really depends.
By the way, another consideration is that it's not binary, all or nothing. It also depends on the type of workloads that you have. If you have legacy workloads-- if you have a third-party workload, there's not much choice. You don't have access to the source, so you cannot play with it. You cannot rewrite it.
Some other workloads are just not able to take the change. So I don't think there's a binary all-or-nothing; it's typically a mix of the two.
ADAM GLICK: What are the lessons that you've learned from doing the lift-and-shift wholesale migrations?
ISSY BEN-SHAUL: The lesson we've learned is that, on one hand, for some workloads, this is a great, sufficiently workable solution. And by the way, when we say lift and shift, there are also variants: you can lift and shift and optimize, and you can rightsize. You can do things at the outset that help some workloads get what they need. However, there are some drawbacks to this solution.
For instance, one of the things we see is that people struggle with the cost efficiencies associated with VM migration-- the fact that you move a VM that was over-provisioned in your VMware environment, where you didn't have to pay for it, because you pay for the host and not per instance.
Now you move to a pay-by-the-instance model, and you have to account for the VM-to-VM migration and make sure there is enough capacity in that instance to avoid spikes. Then you actually end up overpaying for the workload, and it becomes a really cost-prohibitive solution. Or, if you try to rightsize too much, you might affect the performance of the application.
So what we found out is that it actually affects the cost, and one of the things people have been asking is, how can I increase my density? The other thing that happens is that some customers say that, by virtue of doing lift and shift, some of the burden that I had remains, in terms of--
For instance, I still have the same number of operating system images that I need to manage. Nothing changed. So I still have to patch all my OS images. I still have to pay for the licenses of them. I still have to worry about many things that I had to worry on-prem. So in that sense, it's not an advancement. It's just the same thing.
And given that they've moved to the cloud, they say, hey, there are better ways to do it. We know that. We know that Kubernetes offers, among other things, the ability to eliminate OS management, the ability to write once, run everywhere-- and you don't get the benefit of those things. Therefore, for many of the workloads, it's not a satisfactory solution.
That's why we've been thinking about ways to make it better. But again, I don't think it's, again, black and white. Not all workloads are fit for containers, but on the other hand, there are many that are. And by virtue of not doing them, you're actually losing a lot and not getting all the benefit you can get from migrating to the cloud in the first place.
CRAIG BOX: So the offering you've announced today is called Anthos Migrate. Talk us through that program.
ISSY BEN-SHAUL: Sure. Some of you may have had a chance to see today's keynote, in which there was a live demonstration of the solution-- very exciting for us. Anthos Migrate is a really, I would say, revolutionary, first-of-its-kind solution that enables you to take existing VM-based stateful workloads-- and I emphasize "stateful" because it's not just the code, it's all the state: the data, configuration files, databases, everything-- and migrate them into GCP while, at the same time, transforming them into containers that are running and managed by GKE, Google Kubernetes Engine.
And all this is done in a zero-touch model, without having to rewrite or reconfigure or change the way those applications are written, which also makes it applicable for things like WebSphere, WebLogic, and existing application frameworks. So it's a really, I think, very revolutionary approach and solution that, until now, was not available.
ADAM GLICK: That sounds great. How does it work?
ISSY BEN-SHAUL: The way Anthos Migrate is designed is very cool. I mentioned before that we've been developing streaming technology that knows how to take VMs and migrate them to the cloud. We leverage and build on top of that capability, except instead of streaming VMs into VMs at the other end, what we do, conceptually, is extract from the VM, in an intelligent way, only the non-operating-system elements, cleaning them through something we call intelligent adaptation, which eliminates all the elements that relate to the operating system and takes the user mode.
Then we take what we call a kind of generic container that fetches the user-mode information, streams it using our streaming technology, and runs it as a container, providing all the necessary elements from the container stack, and then, on top of it, connects it to Kubernetes Engine.
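Purely as an illustration of the "adaptation" idea, not of Anthos Migrate's actual implementation: conceptually, the migration keeps the user-mode content of the VM and drops the operating-system pieces, something like this. The OS path prefixes below are assumptions for illustration.

```python
# A conceptual illustration (not the real Anthos Migrate mechanism) of the
# "intelligent adaptation" step: given a listing of a VM's filesystem, keep
# only the non-OS, user-mode content -- application code, data, and config --
# that will run inside the container. The OS_PREFIXES list is an assumption.

OS_PREFIXES = ("/boot", "/proc", "/sys", "/dev", "/lib/modules")

def user_mode_paths(vm_paths):
    """Filter out operating-system paths, keeping application state and code."""
    return [p for p in vm_paths if not p.startswith(OS_PREFIXES)]

vm_listing = [
    "/boot/vmlinuz",
    "/lib/modules/4.15.0/kernel",
    "/opt/app/server.jar",
    "/var/lib/mysql/ibdata1",
    "/etc/app/config.yaml",
]
print(user_mode_paths(vm_listing))
```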
So we do it all in one journey, in one stream, that contains both the streaming as well as the actual transformation. Now, the interesting thing about Anthos Migrate is that we built that transformation technology as orthogonal to the streaming, which means you can migrate and modernize in one step, taking a VM from on-premises VMware or-- by the way, soon-- other clouds, and transforming it through Anthos Migrate directly into GKE.
Or alternatively, you can take VMs that are running natively in GCE or another cloud and move them into GKE as well. Or, soon, you'll be able to first transform them from VMs to GKE On-Prem and later on migrate them to GKE. Basically, we meet the customer where they are and decouple the migration from the modernization. We can do them one after the other, in either order, or combined together, which is the default way.
CRAIG BOX: Once your technology has taken the user components out of the VM and moved it into the container, does the container then become something which is frozen in time, or can I go and make changes to the application?
ISSY BEN-SHAUL: That's a great question. The way we look at this is as a multi-phase process. In the first stage, we take the VM contents outside of the OS part and essentially run it as-- view it as-- a nested container running in the target platform, or GKE. That gives a certain set of benefits out of the box. We call it phase one of the modernization.
So the benefits that you get out of the box are: A, you don't have to deal with OS management and patching. B, you get the density, which is critical for those legacy applications, because now you can run multiple containers in one instance. By our measurements, that can be like a 5x to 7x improvement in density, which results in a lot of cost savings.
Third, and most importantly, you start to get the benefits of the orchestration platform. There's resiliency: when a machine is down, the workload can be brought up again. You get visibility and monitoring. You can add things like Istio or CSM as what's called a sidecar container, which gives you a whole new level of networking security, et cetera. And all this without having to touch the actual workload.
So all the benefits of the orchestration are available in phase one-- all the benefits that have to do with the platform. Now, the next phase is: now that I've migrated, can I actually take advantage of continuous integration using image management? By virtue of bringing that VM into a container and running it in the container, we can also introspect the actual container.
Based on that, we build the way for you to get what's called phase two, which is basically extracting the image and moving into image management: the Dockerfile, the container registry, config files or ConfigMaps-- all the elements-- decoupling the storage and the data from the image. That allows customers not only to get the benefits of orchestration, but also the ability to continuously develop, integrate, and deploy their systems using a traditional CI/CD pipeline.
One thing to keep in mind, which I didn't mention as part of the solution, is that an important element of getting workloads into GKE is taking care of storage and networking. Remember, we're talking about running stateful workloads. One of the things we do is take VMs that may have multiple disks and multiple partitions, and we created something called a storage aggregator, which essentially abstracts the whole thing into one root file system that maps into a PersistentVolume. That's another important element: it abstracts the storage and allows the workload to run decoupled from the logic. The storage aggregator is an important part of our solution. And again, we use Kubernetes networking and everything that lets us focus on the image.
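As a conceptual sketch of the storage aggregator idea-- an illustration, not the real mechanism-- per-disk contents are merged into one root tree that a single PersistentVolume can then back:

```python
# A conceptual sketch of the "storage aggregator" idea: a VM's multiple disks
# and partitions are presented as one root file system, which is then backed
# by a single Kubernetes PersistentVolume. The mapping below (mount point ->
# file map, merged into one tree) is an illustration, not the actual mechanism.

def aggregate_mounts(disks):
    """Merge per-disk file maps into one root tree, keyed by full path."""
    root = {}
    for mount_point, files in disks.items():
        for rel_path, size in files.items():
            full = mount_point.rstrip("/") + "/" + rel_path
            root[full] = size
    return root

disks = {
    "/": {"etc/app.conf": 512},
    "/data": {"db/ibdata1": 1_048_576},
    "/logs": {"app.log": 2048},
}
root_fs = aggregate_mounts(disks)
print(sorted(root_fs))  # ['/data/db/ibdata1', '/etc/app.conf', '/logs/app.log']
```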
So to your question about what they can do in terms of development: that comes with what we call phase two. And phase two is actually generated on the fly, incrementally. We create the templates that allow you, later on, to build on top of them all the artifacts needed for continuous integration and then continuous delivery or deployment.
CRAIG BOX: So if I have an application from a vendor installed on a VM in my legacy environment, and I've migrated V1 and modernized it through phase one using Anthos Migrate, and then the vendor comes out with V2 of the application-- am I installing that again in my legacy environment and repeating the process, or am I doing something in my newly migrated container?
ISSY BEN-SHAUL: Yeah. The idea is, if you have an image-- which, by the way, could be an image that is certified by the vendor and already containerized-- we'll know how to take that image, and you can use it to provide an upgrade, or you can use rolling upgrades in Kubernetes, or any other way to do it. And we'll know how to handle the decoupling from all the other artifacts; they will just work out of the box, assuming you have this upgrade. So that's where phase two comes into play.
Now, I should say that the beta version we announced this week is still phase one. Phase two will come shortly after: there will be a preview of phase two-- something like an alpha version-- alongside the beta, and then phase two will follow.
ADAM GLICK: What are the best cases for using Anthos Migrate? What things make the most sense to move into containers with Anthos Migrate, versus those that work better with your traditional lift-and-shift VM migration technology?
ISSY BEN-SHAUL: Actually, this is one of the most important questions we're getting from customers who are playing with this new idea. They're asking us, hey, could you give us guidance, a recommendation, on which workloads should go one way or the other? Based on that feedback from our alpha customers, we're actually developing a module for doing discovery and providing a recommendation on which way to go. This is an interesting segment.
Now, to your question, Adam. The workloads that fall into the sweet spot today are things like Tomcat, WebSphere, WebLogic-- application frameworks. We also offer a good solution for workloads that are impossible to rewrite, because sometimes you don't even control the source code. These are applications that are a really good fit.
The other low-hanging fruit is things like bursty workloads and dev/test workloads-- things that you need to shrink and grow and expand. That's another kind of workload that typically works well. But really, there's no limit, in a way, to the type of workload. There are some workloads that are specifically not a good fit, which our discovery model will warn you away from.
For instance, if you have a workload that has a kernel module, that's something that cannot be containerized, because containers by definition strip out the operating-system piece. So that's not going to be a good fit. Or if you have workloads that require an immense amount of memory-- SAP HANA-- these are things that are not a good fit for containers.
But typical customer transaction processing, or workloads that are used on a day-to-day basis-- these are the ones we recommend. And as I said, as part of our solution, we'll actually make that recommendation. Actually, thinking about it more: if you have software with a license that is tied to the hardware, that's another one for which containers would not be a good fit.
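The kind of fit check Issy describes the discovery module making could be sketched like this. The criteria come straight from his examples-- kernel modules, very large memory, hardware-tied licenses-- while the 512 GiB memory threshold and the attribute names are assumptions for illustration.

```python
# A sketch of a container-fit check like the one the discovery module might
# make. The three criteria are the ones named in the interview; the 512 GiB
# memory threshold and the workload attribute names are illustrative
# assumptions, not the product's actual rules.

def container_fit_issues(workload):
    """Return reasons a workload is a poor container fit (empty means OK)."""
    issues = []
    if workload.get("kernel_modules"):
        issues.append("loads kernel modules; containers strip out the OS layer")
    if workload.get("memory_gib", 0) > 512:
        issues.append("requires an immense amount of memory (e.g. SAP HANA)")
    if workload.get("license_tied_to_hardware"):
        issues.append("license is tied to the underlying hardware")
    return issues

tomcat = {"memory_gib": 8}
hana = {"memory_gib": 4096}
print(container_fit_issues(tomcat))  # []
print(container_fit_issues(hana))
```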
But we've been successfully moving things like WebLogic, WebSphere, application servers, business logic, multi-tier applications, middleware-based workloads, et cetera. All those workloads have actually been tested with our customers quite successfully.
CRAIG BOX: You've mentioned that you've been working with customers as you've gone through the alpha phase of this product. What is the experience that they've had using the tool so far?
ISSY BEN-SHAUL: We did have a chance to work with a number of alpha customers who acted as design partners for us and gave us a lot of good feedback. So far, the experience-- in terms of the value proposition-- has been received really well: the ability to see the density benefits they get out of the box without having to make any changes, the elimination of OS patching and management headaches, and the security and visibility they get from the platform.
As I said, where they were looking for more guidance was for us to help them and tell them what they should move to containers and what they shouldn't. That was something we were not aware of, and their feedback pushed us to go and do it-- essentially building that model into the discovery solution.
In terms of compatibility, obviously, this is a work in progress, and we've been able to fix a lot of issues, but there was not any major architectural thing that inhibited us from making it work. And they migrated a number of workloads successfully. So they're pretty excited, and some of them are speaking on stage this week. For instance, Cardinal Health, which has been working with us for quite a while-- they've migrated thousands of workloads with us, VM to VM, and are now making the next step into modernization, both from on-premises as well as from GCE.
Another customer that worked with us is Atos, which is a great partner for us, both for VM-to-VM and now, increasingly, in the containerization world. They've been, again, very excited about it. They also want support for Windows migrations, which is going to be on the roadmap. Once we have--
CRAIG BOX: It's on the way.
ISSY BEN-SHAUL: --Windows containers, that will come down the road as well. So that's a missing element today, but we're working on it.
CRAIG BOX: We'll put links in the show notes to those sessions as they become available.
ADAM GLICK: Could someone use Anthos Migrate to move their application to any Kubernetes cluster, or is this unique to Anthos and GKE?
ISSY BEN-SHAUL: Architecturally, there is no reason why this solution would not be applicable to other Kubernetes distributions. We are, obviously, focused on Kubernetes. However, for this particular version, we have been focused on making it work for GKE. We will extend it to other Anthos environments, like GKE On-Prem, down the road, but right now, it is tied to the GKE framework.
Now, as the Anthos vision grows and expands to span multi-cloud, this is something we can potentially consider. There is no architectural element that would prevent it from working, as long as the target is Kubernetes, which we are strongly tied to: it's containers that we support, with Kubernetes as the control plane for all the orchestration.
CRAIG BOX: You've mentioned a couple of things about vision and roadmap. Where would you like to see this product go?
ISSY BEN-SHAUL: The vision we see for Anthos Migrate is essentially to be the onboarding mechanism for Anthos. Wherever Anthos goes, we go after it. So as Anthos expands on-prem, we want to give customers the same onboarding option they have with Anthos in general. So if they want to onboard--
And by the way, not only the onboarding, but also the mobility afterwards. There could be customers that, for reasons of performance or agility or cost or security, want to go, for instance-- say there are security considerations or constraints that prevent them from going into GKE. They can go to GKE On-Prem; we'll meet them there.
But at some point later, they get clearance and can move to the cloud, or they want to take advantage of specific performance or services in the cloud that are not available on-prem, in which case we'll mobilize them, using Anthos Migrate, from on-prem-- whether VM or container-- into the cloud.
So the idea is to migrate and mobilize, as well as modernize, workloads throughout the Anthos span. When Anthos goes to AWS, or maybe other clouds, we'll meet them there. That's where we see the vision expanding, along with Anthos.
ADAM GLICK: If someone wants to learn more about Anthos Migrate, where can they go?
ISSY BEN-SHAUL: Yes. Just go to cloud.google.com/velostrata.
CRAIG BOX: Brilliant. All right. Issy, thank you so much for your time with us today.
ISSY BEN-SHAUL: All right. Thank you guys.
CRAIG BOX: You can find Issy on Twitter @Issy972, and you can find all about Anthos at cloud.google.com/anthos.
Thanks for listening. As always, if you've enjoyed the show, please help us spread the word and tell a friend. If you happen to be at Google Cloud Next, come find us and grab a sticker. If you're not and you have any other feedback for us, find us on Twitter @KubernetesPod or reach us by email at email@example.com.
ADAM GLICK: You can also check out our website at https://kubernetespodcast.com to find show notes and complete transcripts of each show. Until next time, take care.
CRAIG BOX: See you next week.