#28 November 6, 2018
TriggerMesh is a new serverless management platform built on top of Knative. Co-founder Sebastien Goasguen joins Adam and Craig to discuss serverless, and potential trips to space.
Do you have something cool to share? Some questions? Let us know:
CRAIG BOX: Hi, and welcome to the Kubernetes Podcast from Google. I'm Craig Box.
ADAM GLICK: And I'm Adam Glick.
[MUSIC PLAYING]
ADAM GLICK: Do anything funny last week, Craig?
CRAIG BOX: I did. One of my favorite comedians is a guy named James Acaster. He's a British guy from a place called Kettering, which I hadn't heard of either, so I don't expect many people listening to this will have. But he is a fantastic stand-up comedian. And he actually recorded a bunch of his old shows for Netflix this year. I do get the impression there are a lot of Netflix fans out there.
If you're interested, there's a series called "Repertoire," which is four episodes: three shows that he's done, and then one show sort of tying them all together, recorded earlier this year. So if you like your British comedy, if you like your sort of dry humor, it's thoroughly recommended. And his new show is very, very funny as well. And for anyone in Britain, he'll be touring there later next year. That was my Halloween night.
ADAM GLICK: Uh huh. Well, given that it was Halloween, All Hallows' Eve, here in the States, or Dia de los Muertos for those celebrating on the 1st, my wife and I decided to have a little bit of themed fun and played a game called Gloom, which is a card game where you're trying to make unfortunate things happen to your people and have them meet untimely deaths, while making good things happen to everyone else's characters. And it's a fun, storytelling, kind of point-taking game.
CRAIG BOX: Sounds very empathetic.
ADAM GLICK: Indeed.
CRAIG BOX: Speaking of Day of the Dead, I believe it's the 20th anniversary of the game "Grim Fandango."
ADAM GLICK: Excellent game.
CRAIG BOX: I've been meaning to play it for almost 20 years. It's one of those things I had-- we had "Sam and Max" and "Day of the Tentacle", some of my favorite games of the era. And then "Grim Fandango" came out, and I just don't know what it was. I just never got around to playing it. It has been remastered since, so I might look that up.
ADAM GLICK: There's even a version on Android, if you want to take it on the plane on your upcoming flight. Or, of course, you could listen to stuff. And I hear that you can hear us in a new location now.
CRAIG BOX: Yes. Well, the good news for Spotify listeners and possibly not that relevant if you're already hearing the podcast, but do tell your friends who like Spotify that you can now subscribe to yonder "Kubernetes Podcast" on Spotify. And we've uploaded all of our back issues, of course, so feel free to dig in and listen to our lovely interviews.
ADAM GLICK: Also a thank you to Peter Benjamin, who put out a list of Kubernetes resources this week. And in there he listed the podcast. So thanks, Peter.
CRAIG BOX: Let's get to the news.
[MUSIC PLAYING]
ADAM GLICK: TriggerMesh has launched a serverless platform built on top of Kubernetes and Knative. Their offering is currently available to customers in an early-adopter program. And we'll be speaking with co-founder Sebastien Goasguen in the interview part of the show.
CRAIG BOX: Istio 1.0.3 was released last week. The team are fixing bugs and issues from the 1.0 series, and this release brings some performance and scalability improvements. Pilot can now deliver endpoint updates to 500 sidecars in under one second, and Mixer's CPU usage has been reduced by 10%.
ADAM GLICK: Heptio released Contour 0.7.0, their Envoy-powered ingress controller for Kubernetes. This version brings support for TLS 1.3 and timestamps for incoming requests, as well as GZIP compression for responses. It has also been updated to support Envoy 1.7.0, which brings HTTP/1.0 support and a number of bug fixes.
CRAIG BOX: Uber published a write-up of their internal container scheduling platform, which they call Peloton. Peloton is a framework built on top of Mesos, designed to more closely emulate Google's Borg in order to support their batch workloads. Uber are looking at building support for stateless and stateful workloads, as well as a wrapper layer to expose the Kubernetes API, which they claim will be possible as the systems are conceptually similar. Peloton is currently internal to Uber and has not been open sourced.
ADAM GLICK: The CNCF posted another profile of a Google Summer of Code student, Jiacheng Xu, creator of idetcd, a CoreDNS plugin used for identifying nodes in a TensorFlow cluster without domain name collisions.
CRAIG BOX: Azure Kubernetes Service had a series of announcements this week, including the end of support for Kubernetes 1.7 and 1.8 at the end of November, an Open Policy Agent controller, proxying for the Kubernetes web dashboard through the Azure Cloud Shell, and general availability in the UK West region.
ADAM GLICK: And that's the news.
[MUSIC PLAYING]
CRAIG BOX: Sebastien Goasguen is the co-founder of TriggerMesh, a serverless management platform based on Kubernetes and Knative. He's a 20-year veteran of open source, recently working on Kubernetes as a director of cloud computing at Bitnami, after their acquisition of his company Skippbox in 2017. Prior to Kubernetes, Sebastien worked on Apache CloudStack and libcloud, and grid computing initiatives at Purdue, Clemson, and CERN. He is the author of three O'Reilly books, including "The Kubernetes Cookbook." Welcome to the show, Sebastien.
SEBASTIEN GOASGUEN: Thank you, Craig. Thanks, Adam. Thanks for having me.
ADAM GLICK: Great to have you. Your career started in academia. What sorts of systems did you work on in the grid days?
SEBASTIEN GOASGUEN: So I started on high-performance computing. My background is in computational science. I was solving Maxwell's equations using supercomputers, shared memory computers. And it was such a pain that I started building what at the time were called Beowulf clusters, so a bunch of PCs racked up together in a lab to try to make a supercomputer. So that led me to cluster computing, which very quickly became grid and then cloud.
CRAIG BOX: And how did cloud become open source for you?
SEBASTIEN GOASGUEN: That was very interesting, because when I started working on parallel computing problems that really needed a lot of compute, my advisor at the time had no clue about parallel computing. So he bought a big HP workstation that had two CPUs. And he said, here you go. You have two CPUs, so it's going to work, run --
CRAIG BOX: I hope you didn't use both of them.
SEBASTIEN GOASGUEN: So that's the funny part. You know, he said, two CPUs, it's going to run in parallel. And then it took me a bit of time to figure out that you needed to do extra things to make it run in parallel. So I looked at things like auto-parallelizers at the time, tools that allowed you to parallelize your for-loops automatically. But those tools were very expensive. And naturally, you know, I gravitated towards open source, because Linux was getting fairly pervasive at the time, especially with the tooling around building clusters. So all that tooling to create Beowulf clusters was really open-source software. So that's really when I started.
And then at Purdue, in the early 2000s, everything was open source at the time: the frameworks for parallel computing, even in MATLAB, were open-source software libraries, MPI and so on, of course. So that's really how it started.
CRAIG BOX: And from there, what led you to leave the ivory towers of academia for the commercial and then the startup world?
SEBASTIEN GOASGUEN: I thought quite a bit about this. And really, my career seems like I hopped from topic to topic. But in fact, I've been following the same challenge, which is access to compute, to be able to solve problems. And the main reason was that using this shared memory cluster was a total pain. You had to wait in line. And then if your code had problems, you had waited 48 hours or five days, and then your code was failing. So it was very difficult. So that's why we went into building our own off-the-shelf clusters. We were trying to get compute cheaply and easily, off the wall, like from a plug.
So from that, I went into grid computing, where-- how can we link multiple supercomputers together? There were lots of initiatives in the US, like the Open Science Grid, the TeraGrid. And I got involved in those initiatives. And this was all about democratizing compute and putting it in the hands of the researcher to solve problems.
But what happened is that instead of concentrating on the actual problem, the actual computational problem, nano-electronics, molecular electronics, I actually tried to figure out how to bring that compute to people. So that became grid computing. And at some point, the problem was very much that you needed to deal with: how do you package an app and make it run in those clusters? How do you handle dependencies? How do you ship it? How do you guarantee the execution?
And that's when virtual machines became a really nice technology to be able to provide that environment. And as soon as we started talking about virtual machines in large-scale clusters, we were talking about clouds. So that's when-- now we are in 2006. Amazon releases EC2. And now instead of talking about grids, we start talking about clouds. And now I really become more of a cloud researcher, doing academic research into cloud systems.
So back to your question, how did I make the jump? And what I saw is that the industry actually has a ton of power in terms of research power and engineering power. So you folks at Google have a lot of muscles to flex to be able to solve problems, to build systems, and AWS, same thing.
And I realized that if I really wanted to have a big impact in the industry or even in the community, making the jump to more of an entrepreneurship role and joining the community was going to be where I was going to have more impact. So I left academia. And I said, you know what? Let's try to go help the world, the community, from a more industry and also business standpoint.
CRAIG BOX: Among the list of words you use to describe yourself online, you say you are a former astronaut hopeful. Where would you have found the time for that?
SEBASTIEN GOASGUEN: Yeah. So I've always wanted to be an astronaut, since I was a kid. So I went to space camp. I didn't go to Huntsville, Alabama, because I was French. So I ended up in the French space camp. We had the French astronaut who flew the shuttle back in the '80s. But then I couldn't apply to be a NASA astronaut, not being a US citizen.
And what happened is that in 2008, the year my first boy was born, the European Space Agency opened up recruitment. And I was like, yeah, I'm going for it. So I applied. It was the first time they had recruited in 15 years. And they selected me. And they sent me a plane ticket to go to Hamburg, Germany. I took the tests, and didn't make it. So that's why it ended as hopeful.
CRAIG BOX: Well, it's good for the Kubernetes community that was the outcome, anyway, sir.
ADAM GLICK: Indeed. You also have a little bit of a history with serverless, even though it's a fairly new technology. You were involved in the Kubeless project, correct?
SEBASTIEN GOASGUEN: Yes, totally. I created Kubeless not in my garage but in my basement with a good friend of mine, Nguyen Anh-Tu, who was the first engineer to join me at Skippbox.
ADAM GLICK: One of the kind of ongoing discussions I hear a lot is what defines serverless? So as someone who helped create some of that, how do you define serverless?
SEBASTIEN GOASGUEN: OK, so what is the definition of cloud? 10 years ago, that was the challenge. What do you mean by cloud? And now it looks like what do you mean by serverless, right? And I like to step back. I like to understand the context of technology and a little bit of the history. Even with Kubernetes it's extremely interesting, because it validates why you want to choose Kubernetes.
But serverless, to me, it's basically still the same search, right? We are searching for a way to make compute extremely accessible to people so that they can solve their problems. And even folks like me who were originally in computational science, we don't want to deal with infrastructure. The fact that we ended up dealing with infrastructure was just an accident of life. We shouldn't have to learn how to write a kickstart file and how to write a Chef cookbook to provision machines.
What we want to be able to do is just write the code for our application. And then, as corny as it sounds, we just want to click Submit. So serverless, to me, is the natural evolution. We're starting to close the circle from grids, if you wish, to cloud. And now we have Kubernetes, which has really laid an amazing foundation for everybody.
And on top of Kubernetes, we can build lots of systems, including something like serverless, which is starting to make us forget about the infrastructure, or at least a certain number of people can start forgetting about the infrastructure. And we come full circle, where we get back to focusing on the applications. So serverless is very much a new type of platform as a service that allows you to deploy applications extremely quickly without worrying about the infrastructure. And it automatically manages your application in terms of scaling and network access and security authorization and so on.
ADAM GLICK: Gotcha. So as you're trying to abstract these away, what made you decide to build on top of Kubernetes as kind of a different form of infrastructure?
SEBASTIEN GOASGUEN: The reason I started working on Kubernetes in 2014-- I'm so sorry. We have to go back in time a little bit. So I see Craig with his Docker T-shirt. It's very interesting. I was very suspicious about Docker when it came out. And I paid attention to it because I needed to understand why people were getting excited by it.
And then serendipity struck. And as I was looking at Docker, I started looking at Kubernetes. And I was like, OK, this is the easy step. We need Kubernetes. Makes total sense in the data center. It makes sense because of the history from you guys at Google working on it, developing it. So I jumped on Kubernetes because I saw it as the underlying system managing Google Cloud.
Since I was a cloud guy, I was like, oh, my god, Google is open sourcing its management framework of GCP. And for me, it was as if AWS had open-sourced the underpinning of EC2. So I was working on CloudStack at the time--
CRAIG BOX: But CloudStack was the re-implementation. CloudStack was basically just a version of EC2, was it not?
SEBASTIEN GOASGUEN: Yeah, exactly. So why did CloudStack and OpenStack happen? It's because AWS didn't open-source EC2. So suddenly, I see Google creating Kubernetes, and I'm like, oh, my god, we do have now this foundation, right? And that's in 2014. And I jumped on it. And I was like, if I had to rewrite a new CloudStack, I would base it on Kubernetes. And indeed, I confirmed with Joe Beda that GCE was actually KVM VMs running in containers managed by Borg. So to me, it made total sense.
And now fast-forward to serverless. When you look at something like Google Cloud Functions, my guess, without knowing all the details, is that Google Cloud Functions or Google App Engine are also managed by Borg. So if we try to build a serverless solution, it makes total sense to build that serverless solution, FaaS, function as a service, on top of Kubernetes. And we've seen many solutions, Kubeless being one of them. That's why I did it on top of Kubernetes. But then you've got Fission, and you've got Nuclio, and Fn, and other solutions.
CRAIG BOX: And now, of course, we have TriggerMesh. So my question to you is why now? Why is now the right time to start a company around serverless?
SEBASTIEN GOASGUEN: There are already quite a few companies in the serverless area, definitely. I'm not leading the charge here. And I don't pretend to be. If you look at the serverless landscape from CNCF, there are lots of companies in the serverless area. Most of them are very focused on AWS Lambda. And it makes sense because Lambda has been at the forefront. They've been pushing the serverless movement. So lots of those startups are developing tooling and services around Lambda.
But why now? So what we've seen in the last, let's say, two years, is that with the FaaS solutions that have been developed-- Fission, Kubeless, and so on-- we all had to tackle the challenge of building a FaaS kind of on our own. We had to figure out how to handle dependencies, how to handle scaling of the function to reduce cold starts, how to do the builds, how to add triggers to the functions so that we can handle multiple sources of events, and so on.
So all of us, we did our little cooking on our side, if you wish. We got some things right. We got some things wrong. But now you see Google coming out with Knative, and Pivotal being super active in the Knative community. Red Hat also being there, which now means IBM also being there. So I think when you look at technology and even open source, you have to be open-minded.
And now I'm seeing a lot of traction, a lot of resources being put into Knative. And the primitives that are being developed in Knative are exactly the primitives that we developed within Kubeless, that you can find in Fission, that you can find in Riff, that you can find in Nuclio, and so on. So I think now is the time for all of us to kind of get together and work on a common serverless foundation. And it makes total sense to me for this foundation to be Knative.
ADAM GLICK: Do you think that if the serverless trend is successful, basically, Kubernetes will be abstracted away from most developers?
SEBASTIEN GOASGUEN: It's a good question. And I think that we tend to want to see the world black and white, ones and zeros, but in fact, you have many different personas in the data centers. You have different personas of developers and operators. And some people, some companies, they're going to work with kubectl and the straight-up Kubernetes API. And it's going to be great for them. That's going to be what they want. They're very happy with it. They're happy with the control.
And some folks are going to say, hey, listen, this is way too detailed for me. I don't want to know all of this. I need something that's a little bit more high-level abstraction.
So I don't want to be as opinionated as some may be. I love Kubernetes. And I love its API. I love kubectl. I do it all the time. I'm very happy with it. I never thought that Kubernetes was difficult to install. Again, it depends on your persona and your background and your expertise.
But when we talk about serverless, definitely people who come in with more of a serverless persona, or more of an app developer persona, don't want to have to deal with any of the low-level Kubernetes objects. It's just like CloudStack: we used to advertise CloudStack as "CloudStack just works. It's boring. You can forget about it."
I think that ultimately, you're going to see that people are going to say that the success of Kubernetes is that Kubernetes is boring. It just works. It's your operating system for your data center. And you're going to have people that are going to work at the low level dealing with the Kubernetes API. But then most developers and so on will be at a higher level and definitely most likely at a Knative level.
ADAM GLICK: How would you describe what TriggerMesh is?
SEBASTIEN GOASGUEN: TriggerMesh is actually a cloud to deploy your functions and manage your event triggers. So we call it a serverless management platform. We build on top of Knative and, of course, Kubernetes and Istio. And we bring this additional layer to bring compute extremely easily to people. We want to hide the infrastructure management. We even want to hide the management of Knative and the management of Istio. And we provide a cloud service.
So it's very much a clone of Google Cloud Functions, if you wish, or a clone of AWS Lambda, with the kicker that, because we are layering it on Kubernetes and Knative, we're also preparing it for on-prem deployment. If any enterprise is ready or interested in getting a serverless experience on-prem, then they will be able to get TriggerMesh for themselves.
CRAIG BOX: Is it named after the fish?
SEBASTIEN GOASGUEN: TriggerMesh?
CRAIG BOX: Yeah.
SEBASTIEN GOASGUEN: Is there a fish called TriggerMesh?
CRAIG BOX: There's a thing called the triggerfish.
SEBASTIEN GOASGUEN: Oh, OK. No. It's really because we're focusing on the event triggers. Deploying functions is actually fairly easy. And if serverless were just about deploying a webhook-- I call it CGI on steroids-- that would be too easy. The hard part is actually to handle the events and the event sources. So for example, you can deploy a function in TriggerMesh and get that function triggered from an event in Google Cloud Storage or an event in AWS SQS. So this coupling of event sources that can come from any cloud, plus the deployment of the function on the TriggerMesh cloud, we think is going to help a lot with cloud workload portability and hybrid solutions. But the name is really triggers, and then a mesh of functions that make up your application.
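To make the trigger idea concrete, here is a minimal sketch of what wiring an event source to a function looks like in Knative Eventing terms, using the eventing.knative.dev/v1 Trigger API. The broker name, function name, and event type attribute are hypothetical, and this is not necessarily how TriggerMesh wires things up internally:

```yaml
# Route storage events from a broker to a Knative Service.
# Broker, event type, and service names are hypothetical.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: storage-events-to-my-function
spec:
  broker: default                               # assumed broker receiving cloud events
  filter:
    attributes:
      type: com.example.storage.object.created  # hypothetical CloudEvents type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-function                         # the deployed function to invoke
```

The same Trigger shape applies whether the events originate in Google Cloud Storage, AWS SQS, or elsewhere; only the source feeding the broker changes.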
CRAIG BOX: When you write the O'Reilly Knative cookbook, you can have a triggerfish as your cover animal.
SEBASTIEN GOASGUEN: [LAUGHS] There you go.
ADAM GLICK: This is Craig's side business in logo design and suggestion.
SEBASTIEN GOASGUEN: You can talk to O'Reilly about this. I don't have a say in the animals.
ADAM GLICK: When you built TriggerMesh, you built on Knative. What things are provided for you by Knative?
SEBASTIEN GOASGUEN: Knative is really bringing you everything you need to build a FaaS. The build system is probably the cleanest and the first part of Knative that you should consider. And what is the Knative build system doing? The build system, developed by Matt Moor and Jason Hall and so on, is a fairly simple system based on Kubernetes custom resource definitions so that you can build Docker images inside your Kubernetes cluster.
But it's not only that. You can also have steps to do other things. So you can create files. You can push to Git. You can do anything in your steps. And Knative Build, even though Google doesn't mention it-- you don't mention it-- is very much an open sourcing of Google Cloud Build, which is a very powerful, totally underappreciated tool. So the build system is extremely easy to understand. And it's a necessary component. You need to be able to put your function inside a container.
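As a rough illustration of that build CRD, here is a minimal sketch using the build.knative.dev/v1alpha1 Build resource; the Git repository, image name, and step details are hypothetical:

```yaml
# Build a container image for a function inside the cluster.
# Repository URL and destination image are hypothetical.
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: my-function-build
spec:
  source:
    git:
      url: https://github.com/example/my-function.git
      revision: master
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor   # builds the Dockerfile without a Docker daemon
      args:
        - --dockerfile=/workspace/Dockerfile
        - --destination=gcr.io/example/my-function:latest
```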
And then you have Knative Serving. Once your image has been built, you need to be able to deploy it. So serving does that for you. In the end, it's just a Kubernetes deployment. So there is not a ton of magic there. But the magic comes from the autoscaling, so scaling to zero, scaling up from zero, which was the tough part. And then you have the reliance on Istio, which is probably the critical part of Knative, because Istio is a big system.
So suddenly you are adopting Knative, which has all the good things, the good primitives that you want to do a FaaS. But suddenly, you are adding a dependency on Istio. Istio brings you a lot of things in terms of service mesh, but I wish that it would have been a much lighter interface to give you the autoscaling.
But anyway, you look at this entire system, and right away you're thinking, OK, I've got everything I need to build a FaaS. This is going to be production. Istio became 1.0, so I'm guessing that Knative will probably get to 1.0 next summer. So this is going to be a production system developed by the entire community and the strong vendors. So if you're looking at the long run, it makes total sense to choose Knative.
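To show the serving side described above, here is a minimal sketch of a Knative Service, assuming the serving.knative.dev/v1 schema; the service name and image are hypothetical. Applying this one object gives you the deployment, routing, and scale-to-zero autoscaling:

```yaml
# A Knative Service: Knative creates the underlying Deployment, Route,
# and autoscaling (including scale to zero) from this single object.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-function                                 # hypothetical function name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/example/my-function:latest  # image produced by the build step
          env:
            - name: TARGET
              value: "world"
```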
ADAM GLICK: I noticed that TriggerMesh is already active on GitHub. You have a number of projects that are already checked in there. What parts of TriggerMesh will be open source?
SEBASTIEN GOASGUEN: Yeah, totally. So you can go to github.com/triggermesh. And there you'll find, first, a CLI client for Knative. That's probably one of the first things that you'll see when you start with Knative: you don't have a client like kubectl. Because it's built on Kubernetes, you can use kubectl straight up. But people tend to like to have a wrapper that provides some convenience for dealing with the objects.
So right away, we started building a client, which we call TM-- TriggerMesh, of course, so TM. You can use it with our cloud. But it's 100% compliant with Knative. So you can use TM with your own Knative installation. As a matter of fact, if people look at the GitLab merge requests right now, they'll see that TriggerMesh is working with GitLab to bring a Knative integration into GitLab. And the way that we're doing it is that you use the TM client, which is open source, to be able to deploy configurations, routes, and so on.
So TM, the client, is open source. We also have a Terraform provider that's open source. We would love to get any feedback there.
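Since the Knative objects are plain Kubernetes custom resources, a kubectl-only workflow is also possible; a rough sketch, where the manifest file name is an assumption:

```sh
# Deploy a Knative Service from a manifest and inspect what Knative created.
kubectl apply -f my-function-service.yaml   # hypothetical manifest, as sketched above
kubectl get ksvc                            # list Knative Services and their URLs
kubectl get revisions                       # revisions created for each deployment
```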
And we've spent quite a bit of time going through this user workflow. What's the user experience that people are going to face with Knative? How do they go from, hey, here is my tidbit of Python or Go, how do I go from this to having a function that's running?
And what you want to do is really concentrate on this user experience. So you'll see that we have quite a few build templates that are open source. We may contribute them upstream if there is an appetite for it; definitely no reason not to do it. But we have build templates so that you can deploy your OpenFaaS functions. That's a very interesting topic, actually, because any OpenFaaS function can be deployed in Knative, since OpenFaaS in the end just bundles a function as a container.
We have some build templates for Azure. So Azure functions can be bundled and deployed on Knative. That's also extremely interesting. So you'll find all of that in our GitHub repo.
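As a rough illustration of those build templates, the mechanism is a Knative Build that references a reusable BuildTemplate; a minimal sketch, where the template name, repository, and image are hypothetical and do not necessarily match the names in the TriggerMesh repositories:

```yaml
# Build an OpenFaaS-style function with a reusable BuildTemplate.
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: my-openfaas-function-build
spec:
  source:
    git:
      url: https://github.com/example/my-openfaas-function.git  # hypothetical repo
      revision: master
  template:
    name: openfaas-runtime          # hypothetical BuildTemplate installed in the cluster
    arguments:
      - name: IMAGE
        value: gcr.io/example/my-openfaas-function:latest
```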
CRAIG BOX: As Knative moves towards beta, at what stage do you think TriggerMesh will become boring, will just become infrastructure that people can depend on? When will it be ready for business-critical workloads?
SEBASTIEN GOASGUEN: That's definitely the goal with the cloud. And what you're hinting at is definitely correct, is that we're going to be dependent on the release cycle of Knative. So we won't be able to say that TriggerMesh is production-ready until Knative goes production-ready.
But there is so much to learn. And there is so much to help the community with, to participate in this development, that we need to get started right now. Even if Knative is just alpha, we need to know the system. We need to figure out what is the user experience of interacting with the system. What are the users trying to do?
And the real reason behind the cloud for us is that we want to put Knative in the hands of people extremely quickly so that we can start this feedback loop extremely quickly. We can get that feedback from users. And of course, some of them are not going to be happy. But we hope that we're going to find some early adopters that are going to help us design the system, design that interaction. And once Knative becomes 1.0 and is production, at the same time, the user interaction will be there. The workflows for end-to-end function deployment will be there. And we'll be in good shape.
CRAIG BOX: If someone is listening to the show and thinks, I'm interested in that, I'd like to be one of those early adopters, what would you like them to do next?
SEBASTIEN GOASGUEN: So great. Thank you, Craig. We have an early adopter link on the website, triggermesh.com. And you'll see that we have an EAP program. And please contact us.
At first, please be gentle. Understand that this is very fresh. And definitely we're taking a chance. We're taking a risk. But we're using our own system every day. We're starting to feel pretty happy with it. Put your functions in GitHub, Bitbucket, or GitLab. Link them in TriggerMesh. Automatically your functions get deployed. You can trigger them. So yeah, join the early adopter program, and then give us some feedback. That would be great.
ADAM GLICK: Thanks for joining us, Sebastien.
SEBASTIEN GOASGUEN: Thank you.
ADAM GLICK: You can learn more about TriggerMesh at triggermesh.com. And you can find Sebastien on Twitter @sebgoa. That's at-S-E-B-G-O-A.
[MUSIC PLAYING]
CRAIG BOX: Thanks for listening. As always, if you've enjoyed the show, whether on Spotify or somewhere else, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on Twitter @KubernetesPod, or you can reach us by email at kubernetespodcast@google.com.
ADAM GLICK: You can also check out our website at kubernetespodcast.com. Until next time, take care.
CRAIG BOX: See you in Shanghai.
[MUSIC PLAYING]