#94 March 10, 2020

gRPC, with Richard Belleville

Hosts: Craig Box, Adam Glick

Richard Belleville works at Google on gRPC, a high-performance, universal RPC framework. Richard used gRPC before joining Google to work on it; he talks to the hosts about its history and derivation from Google’s internal Stubby, how it works, and how it differs from other RPC and messaging systems.

Do you have something cool to share? Some questions? Let us know:

Chatter of the week

News of the week

CRAIG BOX: Hi, and welcome to the Kubernetes Podcast from Google. I'm Craig Box.

ADAM GLICK: And I'm Adam Glick.


ADAM GLICK: It's been an interesting time here in Seattle, where I'm based. Obviously, there's a lot going on. So I hope everyone out there who's listening is staying safe. But it has meant that we've had a lot of time at home. We haven't been going out quite as much as we normally do. And I had a chance to check out Season 3 of "Castlevania," which has come on Netflix. I think I have spoken about that earlier on the show.

CRAIG BOX: Yes, a long time ago.

ADAM GLICK: Yes, adaptations of video games into other media forms have not always gone so well. I think of the Pac-Man TV show, the Super Mario Brothers movie.


ADAM GLICK: [CHUCKLES] Exactly. But the "Castlevania" series really just continues to impress me, just the depth of the characters and the really interesting animation. It really took what was a video game of my childhood and has turned it into a really, really interesting and rich story. So I've been enjoying that. And any of you who enjoyed the video game in the past or the series in the previous seasons may also enjoy that that just came out in the past couple of weeks.

CRAIG BOX: Something that's been celebrated here over the weekend is the anniversary of the first media in "The Hitchhiker's Guide to the Galaxy" series. I realize that for many people listening to the show, it will be somewhat of a sacred text. But "The Hitchhiker's Guide" actually started off as a BBC Radio series in 1978.

And so it is the 42nd anniversary of "The Hitchhiker's Guide." We don't celebrate the 40th, obviously. We're celebrating the 42nd because, of course, that's what you get when you multiply 6 by 9. "The Hitchhiker's Guide" was definitely a series of books and radio plays that I enjoyed growing up. I do hear that a TV reboot is possibly coming. I know, Adam, you watched the movie version of "The Hitchhiker's Guide"---


CRAIG BOX: --which was somewhat sanctioned by Douglas Adams before his death, but considered quite a divergence from the original rustic Britishness of the series, shall we say, with its Hollywood implementation. It will be interesting to see what happens next.

ADAM GLICK: There's no reason to panic over that, man. Don't panic.

CRAIG BOX: Let's get to the news.


CRAIG BOX: The Istio project announced version 1.5. This quarterly release simplifies the management of the Istio control plane by unifying the microservices into a single binary called istiod. That doesn't mean microservices were a mistake. And a blog post on the Istio blog written by yours truly goes into the details. Other features include automatic mutual TLS now being on by default, a much more simplified installation, and the V2 telemetry system, which cuts latency and related CPU consumption in half.

ADAM GLICK: Another headline feature of Istio 1.5 is support for extensibility via WebAssembly, or Wasm. Google has built WebAssembly, a portable bytecode runtime born of the browser, into the Envoy proxy. A standard interface called Proxy-Wasm will let these extensions run in any Envoy version or on any other proxy that implements the standard. The first use of this extensibility is in the new Istio telemetry system. An SDK is provided with support for three languages and more to come. Finally, WebAssembly Hub, built by Solo.io, our guests on Episode 55, rounds out the release.

CRAIG BOX: Google Cloud announced a Global Mobile Edge Cloud strategy. A new Anthos for telecom offering will help operators better serve their customers with the rollout of 5G. And telcos will be able to rapidly enable a globally distributed edge by lighting up thousands of locations that are already deployed in their networks.

As part of the announcement, Google Cloud announced a collaboration with AT&T to enable 5G edge solutions for key industries like retail, manufacturing, and transportation, as well as a number of other partnerships.

ADAM GLICK: Google Cloud has added a service level agreement to GKE. The new financially backed SLA, with an industry-leading 99.95% for regional clusters, or 99.5% for zonal clusters, is supported by a $0.10 per hour cluster management fee. Customers will get their first zonal cluster without the management fee. And Anthos customers are not charged.

CRAIG BOX: After this pricing change, DevOps Directive wrote a post discussing the price differences of various managed Kubernetes services. There is no single takeaway, but services that don't charge for cluster management start out cheapest. Cluster pricing is dominated by compute and network costs, and usage discounts vary. Google Cloud was the cheapest of the big three clouds when running 12 or more vCPUs in a cluster and cheaper than all surveyed after 50 vCPUs.

ADAM GLICK: HPE announced the general availability of their container platform this week, focusing heavily on their bare metal offering and making a less-than-subtle jab at their competition: they called out the "V-tax" people pay with the use of a hypervisor. HPE also announced that they will be providing professional services for customers, as well as reference designs, for use cases that include AI, data analysis, Edge, and IoT.

CRAIG BOX: VMware's open source toolkit gains two new releases this week. Ingress controller Contour 1.2 now includes support for certificate rotation and better support for Envoy rollouts. Backup and restore tool Velero, formerly Heptio Ark, released 1.3, with improvements to CRD backups and restores and support for ARM and PowerPC architectures.

ADAM GLICK: This week brings us two new case studies. First up, food delivery company HelloFresh published the first part in a series on what they learned running Istio in production. The second, academic research sharing company Kudos talks about their migration to Kubernetes via GKE and how they were able to trivially add SSL termination to their app using Istio.

CRAIG BOX: A security evaluation from Jack Leadford at NCC Group has looked at Istio's network security features. No major holes were found, but a couple of areas where things don't work the way you might assume were highlighted. Leadford points out that a combination of Kubernetes network policy and Istio security control is the best way to secure your internal workloads.

ADAM GLICK: TiKV, the distributed transactional key value database, has announced that they have just passed a third-party security audit, funded by the CNCF. The audit team from Cure53 stated that the static code analysis did not reveal any significant problems and summarized the stack as mature. Issues were identified with the use of outdated libraries that contain vulnerabilities, and the TiKV team has set up a tracking bug to address this.

CRAIG BOX: AWS launched the Firecracker micro VM in November 2018 and presented a paper on the topic at the recent USENIX Symposium on Networked Systems Design and Implementation (NSDI). Adrian Colyer summarizes papers in computer science and, this week, looked at the Firecracker paper. His write-up is a great place to start before diving into the paper itself. He gives a quick overview as to the two problems AWS was looking to solve-- strong isolation and low resource overhead, which equates to fast startup times.

In other AWS news, EKS now supports the AWS encryption provider, which lets you envelope-encrypt Kubernetes secrets using a master key from the Amazon KMS.

ADAM GLICK: The CNCF has released the results of their annual community survey. Key takeaways include the growth of several CNCF projects into production use, including Kubernetes, Prometheus, and CoreDNS, which all saw substantial growth. Service meshes are also a popular topic, with 18% of organizations running one in production and 47% evaluating the use of one. Istio topped the list there. 41% of folks are using a serverless platform.

AWS Lambda is the most popular overall, with those installing their own serverless platform, preferring Knative. The number of people planning to use a serverless platform actually dropped this year, from 25% to 20%, possibly showing a leveling off of the desire to use a serverless platform at this time. Most organizations are now using public clouds for their Kubernetes clusters. And the two leading providers were Google Cloud's GKE and AWS's EKS.

Interestingly enough, release cycles continue to shorten. Daily releases saw the most gain, coming in slightly behind the weekly cadence. Ad hoc is up and monthly is down. Fully managed deployments dropped, but so did fully automated ones, as hybrid deployments jumped 16% to become the most common way organizations are shipping their bits. Finally, 84% of organizations now report the use of containers in production, an increase from 73% in 2018. Dev and test usage remains stable around 90% since 2016.

CRAIG BOX: Finally, two updates on the news from last week. The enhancement to mark a container as a sidecar will not make it into 1.18 after all, as changes to pod and container lifecycle are hard to review and notorious for having edge cases. This feature, originally slated for 1.15, now looks likely to land in 1.19.

ADAM GLICK: One day after our last episode, the CNCF announced that due to the ongoing spread of the coronavirus, KubeCon EU will be delayed until July or August. If you are registered to attend, you should have received an email last week with the details, a link to their web page talking about the change, and information about moving your existing bookings or requesting a refund if you won't be able to make the later date.

KubeCon China is also canceled outright for this year. We're glad the CNCF is prioritizing everyone's health, and we look forward to having our traditional KubeCon meet-up and meeting all of you when the event is rescheduled.

CRAIG BOX: And that's the news.

ADAM GLICK: Richard Belleville is a software engineer with Google Cloud and a core contributor to gRPC, Google's RPC framework that was donated to the CNCF as an incubation project. Welcome to the show, Richard.

RICHARD BELLEVILLE: Thank you. It's great to be here.

ADAM GLICK: First off, what is gRPC?

RICHARD BELLEVILLE: gRPC is a modern RPC framework. I think for a lot of people, they're more familiar with messaging frameworks, so it might be helpful to define first what an RPC actually is. RPC stands for remote procedure call, and it's sort of an old idea. You can find some really old RFCs, where when distributed systems were first being built, they had the idea of invoking a function remotely on a different machine.

And they wanted it to look, from the perspective of the programmer, as similar as possible to invoking a function on the local machine. That's the basic idea behind RPC. There's a lot of complexity behind it because, as we know, it's not as simple to make something happen on a separate machine as it is on the local machine. But that's the basic idea.
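The idea Richard describes-- a client-side stub that makes a network call look like a local function call-- can be sketched in a few lines of Python. Everything here is a toy with invented names, and the "network" is just a function handing bytes to a dispatcher; a real RPC framework also handles transport, errors, deadlines, and much more.

```python
import json

def add(a, b):
    # The "remote" procedure, living on the "server" side.
    return a + b

SERVER_PROCEDURES = {"add": add}

def server_handle(request_bytes):
    # Server side: decode the request, invoke the procedure, encode the reply.
    request = json.loads(request_bytes.decode())
    result = SERVER_PROCEDURES[request["method"]](*request["args"])
    return json.dumps({"result": result}).encode()

class Stub:
    # Client side: each method call is serialized and "sent" to the server.
    def __getattr__(self, method):
        def call(*args):
            request = json.dumps({"method": method, "args": args}).encode()
            reply = server_handle(request)  # stand-in for the network hop
            return json.loads(reply.decode())["result"]
        return call

stub = Stub()
print(stub.add(2, 3))  # reads like a local call, but crossed a "wire": 5
```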

ADAM GLICK: And I assume that the g in gRPC stands for Google?

RICHARD BELLEVILLE: It does not! It actually changes from release to release. So I'm the release manager for 1.28, and the release manager before me picked the name Galactic. And I picked Gringotts for 1.29.

ADAM GLICK: That's awesome.

RICHARD BELLEVILLE: Yeah. There is a document that you can go through and you can see all of the different G words that we've picked, but every time you bump the version, you also bump the G word.

ADAM GLICK: Interesting. So it never has had an official definition. It's just the G is G, and each time, it changes.

RICHARD BELLEVILLE: Yeah, exactly. It's kind of like the SAT, right? It doesn't stand for anything. I think originally it was, like, the Standardized Assessment Test, something like that.

ADAM GLICK: I thought it was Standard Aptitude Test.


ADAM GLICK: It's been many, many years since I've taken that test, so I don't know. You joined the gRPC team about a year and a half ago, correct?


ADAM GLICK: What have you done previously?

RICHARD BELLEVILLE: I was working at a company called ADTRAN. We worked on edge networking stuff. I was on a platform team there. So I worked on a messaging system, somewhat similar to gRPC. And then we built a container orchestrator a little bit before Kubernetes was starting to really get going, and eventually worked on a system that enabled people to turn up internet service remotely without actually rolling a truck.

ADAM GLICK: And then what led you to come to Google and work on gRPC?

RICHARD BELLEVILLE: I found the gRPC project really interesting. Actually at my last job, somebody gave a tech talk about gRPC Python, of which I am now 50% of the maintainers. It was interesting then. And in college, I had already been familiar with protocol buffers, which are a key component that I'm sure we'll talk about in just a little bit within gRPC, and had used those to build sort of a messaging protocol between laptops, browsers, and autonomous robots.

So I was already somewhat familiar with the concept behind gRPC, and I was really interested in the idea of applying that at large scales, both within Google and outside of Google. The idea of working on an open source project was also really interesting to me. I liked being able to contribute to the broader technical community.

ADAM GLICK: You mentioned autonomous robots. I have to ask, have you ever taken part in the DARPA challenge?

RICHARD BELLEVILLE: Not the DARPA challenge, the NASA robotic mining challenge was what we competed in. And we got first that year.

ADAM GLICK: Congratulations.


ADAM GLICK: You mentioned that a protocol buffer was a core part of this. What is a protocol buffer?

RICHARD BELLEVILLE: Protocol buffers pre-date gRPC by a whole lot. I believe they came around 2004. The original author was Sanjay Ghemawat, who's still working here and just hit his 20th Googleversary. And protocol buffers are basically a way to serialize structured data, OK?

So a lot of people are probably familiar with the idea of communicating using JSON between services. So in comparison to JSON, protocol buffers are binary on the wire. They're packed, which means they take up fewer bits on the wire. And there are language bindings that allow you to interact with them in a natural way in pretty much any language you'd like to use.
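The "packed, binary on the wire" point can be made concrete with protobuf's varint wire format, which encodes integers in little-endian base-128 groups behind a field tag byte. The sketch below hand-encodes a single integer field and compares it to the equivalent JSON; it is a simplification of the full format (one field, varint wire type only), for illustration.

```python
import json

def encode_varint(value):
    # Emit 7 bits at a time, least-significant group first; the high bit
    # of each byte says "more bytes follow".
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def encode_int_field(field_number, value):
    # Tag byte: field number shifted left 3 bits, wire type 0 (varint).
    return bytes([field_number << 3]) + encode_varint(value)

proto_bytes = encode_int_field(1, 150)    # field 1 = 150 -> b'\x08\x96\x01'
json_bytes = json.dumps({"user_id": 150}).encode()

print(len(proto_bytes), len(json_bytes))  # the packed form is far smaller
```

Three bytes on the wire versus sixteen for the JSON text-- and the gap grows once field names get longer.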

It is an independent project from gRPC, but the vast majority of the time if you are using gRPC, you will also be using protocol buffers as the serialization format to communicate between your services. These things are very well optimized. For a long time, we worked with the protocol buffers team, and they spent a lot of time thinking about very tiny optimizations at the assembly level that will make your services go faster.

And so if you have a different serialization format, odds are, you're not going to be able to make it as performant as protocol buffers are right now. Protocol buffers are actually really interesting within the context of Google because Google uses them for basically all communication between services.

So there are different sayings that we have about protocol buffers, and one of them that I find really interesting is the more senior you get, the more time you spend defining and reviewing protocol buffers and the less time you spend implementing code. Because once you've defined the protocol buffers, the rest of the process of building your service out is more or less mechanical.

Right, so they look a lot like C structs. They let you say that I want an int. I want a string. I want these different types of data inside of my overall message, and they're composable. They don't allow you to specify your full protocol, right, because there are things that are important to your protocol.

Like, I want to send one request and then receive one response in lock step. Ping pong, ping pong. But you might also want ping ping, pong pong. That's the sort of thing that you couldn't encode in your protocol buffer. And so there is still some element of defining an API in terms of English as well.

ADAM GLICK: When you joined the team, you hadn't been there from the beginning. So what are some of the folklore and the things that people have been working on before you that you wanted to learn and that you picked up as you started working on the project?

RICHARD BELLEVILLE: I think a lot of people listening to the podcast are probably familiar with the idea that Kubernetes evolved from a project within Google called Borg, right? And so, Kubernetes came about as a way to create an open source version of Borg. The story of gRPC is very similar.

So Google has used a tool called Stubby for RPCs since-- I think Stubby 1 came around in 2005. And that tool has massive usage. We're talking on the order of, I think, 10 billion RPCs per second. I don't think I'm confabulating there. It is a massively, massively used piece of software. If you make a micro optimization in Stubby, it can have massive implications for the entire fleet at Google.

So the gRPC team is actually the Stubby team. I don't know what the story is for Kubernetes. I get the sense that maybe there was like a disconnect there between the Borg team and the Kubernetes team. The gRPC team is the Stubby team. Like, I have also been maintainer for Stubby Python, as well as for gRPC Python. So we carry the wisdom and the weight of having maintained Stubby for so long.

And so the knowledge that went into building Stubby is the same knowledge that went into building gRPC. There are a lot of concurrency and parallelism experts on the team. We've got several former professors who work on the team and are really good natured and a lot of fun to work with. And I have learned a ton on that front since I came here. I absolutely think that being on the team has made me a better programmer by far.

ADAM GLICK: For gRPC, you mentioned a little bit earlier about the way that people are basically making remote procedure calls and that by using binary you've been able to basically maximize the throughput that you have on the wire, that you're minimizing the amount of bits that you have to send in order to send data back and forth. What was the problem that was being solved back then? I mean, it's certainly not the first protocol that some organization has created or that's available that people could call things across the network.

RICHARD BELLEVILLE: gRPC came about in about 2015. And so the idea back then, right around the same time as Kubernetes, was to build an open source version, or at least, to have an open source answer for RPC. And so one of the alternatives that was considered was open sourcing Stubby. Google is interesting in that the code that you write here is pretty dissimilar from what you might write elsewhere.

So I don't think it's a secret that Google is mostly a C++ shop, at least internally. So if somebody just talks about a service without a language propended to it, odds are they mean the C++ version of that service, right? So when people talk about Stubby, they're talking about C++ Stubby. When they talk about Java Stubby, they mean Java Stubby, right?


RICHARD BELLEVILLE: So Stubby number one was C++ centric. And the problem is that we have a lot of libraries in Google that were not open sourced, things that sort of extend the standard template library. Actually, a lot of those have recently been released as an open source project called Abseil, which we've integrated into gRPC.

But at the time, it wasn't feasible to take that whole mass of C++ code and bring it out into the open, but it certainly was considered. There are other dependencies within Stubby like Chubby, which were not easily removable. But it was definitely something that was considered.

So there are other RPC systems out there. But we found that they didn't match up with our ideas of, one, what an RPC system needed to perform like. And two, we wanted to have control over a migration path. So we needed things to be sufficiently pluggable that we wouldn't cause any downtime by migrating services from Stubby to gRPC. And so we have really good systems in there that do allow us to do that-- to, for example, serve Stubby and gRPC from the same process.

ADAM GLICK: Given how critical this is to everything that's going on within Google, why did Google decide to make this open source?

RICHARD BELLEVILLE: This was under cloud at the time. So one of the main things that we wanted was to enable performant access to Google Cloud Services, right? So one of the main things that I think gRPC does a lot better than other RPC or messaging systems out there is full bi-directional streaming.

So we're built on the HTTP/2 protocol, which goes back and solves some of the pain points with HTTP/1. One of those is head-of-line blocking, and it really does enable our full streaming support. So as I mentioned before, you can describe RPCs in terms of ping pongs, right? Ping is a request. Pong is a response. A lot of systems just allow you to do ping pong, period, end of story.

However, gRPC allows you to do any arbitrary interleaving of pings and pongs if you want to. We call arbitrary ordering bi-directional streaming. We have full support for what we call unary RPCs, which is just ping pong. And that probably is the majority of the use cases. But when you do need streaming, you really do need it.
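The ping/pong "arities" Richard describes map onto gRPC's four call shapes: unary, server streaming, client streaming, and bidirectional streaming. The sketch below illustrates those shapes with plain Python generators standing in for network streams; the function names and payloads are made up, since real gRPC generates typed stubs from your .proto file.

```python
def unary_unary(request):
    # ping -> pong: one request, one response.
    return f"echo:{request}"

def unary_stream(request):
    # ping -> pong pong pong: one request, a stream of responses.
    for i in range(3):
        yield f"{request}:{i}"

def stream_unary(requests):
    # ping ping ping -> pong: a stream of requests, one response.
    return sum(requests)

def stream_stream(requests):
    # Full bidirectional streaming: requests and responses can
    # interleave in whatever order the application wants.
    for req in requests:
        yield req * 2

print(unary_unary("hi"))              # echo:hi
print(list(unary_stream("x")))        # ['x:0', 'x:1', 'x:2']
print(stream_unary(iter([1, 2, 3])))  # 6
print(list(stream_stream(iter([1, 2]))))  # [2, 4]
```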

We've seen a lot of hacks out there that enable something akin to bidirectional streaming, but are so much less performant and make you subject to the idiosyncrasies of intermediary networking elements between you and the service that you're trying to get to. Things like long polling are not necessary within gRPC. And so these were the things that we wanted to bring to open source users.

ADAM GLICK: You mentioned that this also came from the internal project Stubby and that the Stubby team is alive and well working on things. What's unique about gRPC in comparison to, say, a hypothetical closed source, Stubby v4, inside Google?

RICHARD BELLEVILLE: Stubby v4 is actually not hypothetical. It has a fair few users now. Stubby v4 was sort of a redo of the Stubby 2 API to work on fibers, which Google has definitely talked about in the open before. They're sort of inspired by the Golang concurrency model that fundamentally allows you to write something that is blocking, instead of writing more complex asynchronous code.

There were a couple performance issues there that meant it wasn't quite as popular in the long term. But the reason fundamentally that we pivoted to focus on gRPC was because of the value of delivering an open source project.

So in terms of differences between those two, Stubby v4 was very C++ centric, right? So it doesn't have the multi-language multiplatform support that gRPC does, right? We'll run on a lot of different platforms in almost every language that you could pick out of a hat. Stubby v4 certainly wouldn't have done that without significant effort.

It would also mean that you're dependent on a new enough kernel. gRPC tries to scale to what you have available on your system. So we have different polling engines, right? We will run on epoll. We will run on poll, lots of different polling mechanisms there. And that wouldn't necessarily work the same way for a hypothetical open source Stubby v4.

ADAM GLICK: Earlier, you mentioned that gRPC was really a way to make calls across the network, and we talked about there are other ways that people might do that. Some of the common ones that people might think about are just kind of people making Ajax calls using JSON these days, or even rolling back to the early days of my development career, kind of SOAP and XML.


ADAM GLICK: And so how is gRPC different than those other ways of making cross network asynchronous calls?

RICHARD BELLEVILLE: I think I've touched on the main elements of this so far, but the two things that I really want to focus on here are a strong interface definition language-- protocol buffers-- and full support for bi-directional streaming. So let me dig into that a little bit.

The idea of an interface definition language is you have a strong contract between your client and your server for how they will communicate with one another. So the most popular communication solution right now is sort of HTTP JSON REST. And there are tools out there like Swagger that allow you to put a schema on it. But it's sort of an afterthought, right?

The tools are a little bit hard to work with sometimes. Protobuf comes first. You write your .proto file first. And then code is generated after that. You get good backwards compatibility solutions from protocol buffers, and it is binary on the wire. There's tooling out there that helps you work with that as a human being as well. Because I, as a human being, don't do a good job of looking at a binary representation of things. So there's tooling that will help you work with that as well.

So bi-directional streaming I've also touched on. I think there also are some other RPC systems out there that will do that for you. But HTTP/1 JSON REST, which is most popular, certainly will not. Things like long polling are not necessary once you can just do a stream of responses back. And so that becomes really useful for things like large files or long-lived connections. And you would have to have a much more complex resource model in your service in order to enable such a thing with just plain old REST.

ADAM GLICK: So you talked about Swagger, which I think is sometimes now referred to as OpenAPI. Is there a similarity between the two in terms of how you define the API of what you're calling? Is gRPC OpenAPI compliant, or are they really separate projects?

RICHARD BELLEVILLE: They're definitely separate projects. I can't say that I'm an expert on open API. I have used it a little bit. But definitely, I think that people tend to define their API as an afterthought. And it's defined more by the implementation than by the API specification. With protocol buffers, it tends to be the other way around.

ADAM GLICK: Do you have to use protocol buffers to use gRPC?

RICHARD BELLEVILLE: So you don't. Something that I don't think I've touched on yet is that gRPC is very, very pluggable, right? So in order to run in both Google data centers and in any arbitrary open source environment, we had to provide hooks that allow you to extend it as much as you want.

So one of those things that is pluggable is the interface definition language. As far as the core of gRPC is concerned, you are sending bytes on the wire. The additional layer that we add on top of that is code generation that integrates with the protocol buffer compiler, that interacts with the byte-level API of gRPC to do the encoding for you, to behave like the API that you've defined in your .proto.

Other things that are pluggable within gRPC-- authentication, load balancing, name resolution, in some languages, compression. And we found that to be really useful. I've seen some cool things like in gRPC Java, people have used Zookeeper for name resolution. And you can be pushed updates for things like that, which I think is a pretty good improvement over DNS.

ADAM GLICK: Interesting. Could I use JSON or plain text as the content body if I wanted?

RICHARD BELLEVILLE: Absolutely. So we have a blog post from I think a couple of years ago at this point, where a former co-worker, Carl Mastrangelo, built out JSON integration with gRPC. And so you can follow through that. You could even integrate with your own code gen if you needed to, to add another layer of syntactic sugar on top of things.
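The byte-level pluggability Richard describes can be sketched as a core that only moves opaque bytes, with serializer hooks layered on top-- here using JSON instead of protocol buffers. This is a toy with invented names; in real gRPC Python, the analogous hooks are the request_serializer and response_deserializer arguments supplied when building a method callable on a channel.

```python
import json

def byte_core_call(handler, request_bytes):
    # The "core": opaque bytes in, opaque bytes out. It has no idea
    # whether the payload is protobuf, JSON, or anything else.
    return handler(request_bytes)

def make_method(handler, serialize, deserialize):
    # The pluggable layer: wraps the byte-level core with a codec.
    def call(request):
        return deserialize(byte_core_call(handler, serialize(request)))
    return call

def shout_handler(request_bytes):
    # A handler that shouts back whatever "name" field it receives,
    # using JSON on the wire.
    msg = json.loads(request_bytes.decode())
    return json.dumps({"reply": msg["name"].upper()}).encode()

shout = make_method(
    shout_handler,
    serialize=lambda m: json.dumps(m).encode(),
    deserialize=lambda b: json.loads(b.decode()),
)

print(shout({"name": "grpc"}))  # {'reply': 'GRPC'}
```

Swapping the two lambdas for protobuf's SerializeToString and FromString is, conceptually, all that the default protocol buffer integration adds.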

ADAM GLICK: I love that, syntactic sugar. Earlier on, you mentioned that gRPC had a connection to the HTTP/2 definition. How are those two related?

RICHARD BELLEVILLE: As I said, gRPC is highly pluggable, right? So in most language implementations, you actually can swap out the transport, right? So HTTP/2 is the default, and that's what gRPC was launched with. As gRPC was being developed, the HTTP/2 spec was actively evolving.

And so there would be points where the team would receive a new version of the spec. They'd have to figure out what had changed and how the code base needed to change in order to cope with it. But we do support transports like Cronet. We'll do OkHttp, things like that. And you could potentially build your own.

ADAM GLICK: You spoke earlier about the fact that a lot of code within Google is written in C++. But obviously, when you think about the open ecosystem, there are lots of different languages that people are using. You also mentioned there was a Java version. You mentioned you worked on Python. How does the gRPC team face the challenge of supporting such a wide array of languages that developers are using today?

RICHARD BELLEVILLE: One part of the answer to that question is just manpower. We throw a lot of people at the problem. But that doesn't completely scale. So let me start by giving you an abbreviated list of the languages that we support-- C++, Java, Go, Python, Ruby, PHP, C#. I think those are all of the ones that we maintain in-house. I don't think I've missed anything there.

But there are also community driven language bindings like Rust and Haskell and Swift. I missed Objective C. We do Objective C in-house as well. OK, so it takes a lot of different expertise to make that happen. Part of our answer to saving on effort there is something that we've traditionally called C Core, but we're now calling Core because it's implemented in C++ now. And a lot of languages wrap on top of that.

So Python, for example, is a C extension, where inside of Linux, the shared object library is gRPC Core, which is maintained by the C++ team. And so the intent is for those wrapped languages to just have a thin layer that gives you an idiomatic API for that language on top of the highly performant C++ Core. The languages that do that are Python, Ruby, PHP, and C#. And so we actually maintain three stacks in parallel-- the Core-based ones, plus Go and Java, which have their own implementations.

ADAM GLICK: And the wrapping seems like it's for, in most cases, interpreted languages, although you did mention .NET, which does get compiled.

RICHARD BELLEVILLE: There are actually two implementations for C#. And Microsoft has been working on the native .NET implementation. The traditional C# bindings have been a C extension as well.

ADAM GLICK: OK, so a wrapper. Interesting. It's great to hear about the broad language support that gRPC is receiving. What about the different platforms that it can run on?

RICHARD BELLEVILLE: This is actually a pain point for us, but I think a point of joy for people who use us. We will run on Mac, Linux, Windows. And we will run on ARM. We will run on x86. And we put a lot of effort into testing our system on all of these different platforms. In addition to just the platforms that we support natively, I've seen there are BSD ports for us. That's maintained by somebody downstream from us with a vendored fork. But certainly on odd platforms, you will see gRPC running relatively well.

ADAM GLICK: One of the things that you didn't mention was JavaScript. Can you make a gRPC call from a web browser?

RICHARD BELLEVILLE: We do maintain JavaScript in-house. I'm sorry, it's just such a long list. So let's focus on two sides to JavaScript, right? Because you've got JavaScript on the server side, and then you've got JavaScript in the browser. JavaScript on the server side is also a wrapped language wrapping the C extension.

JavaScript in the browser is a slightly different story. Because it runs in the browser, right? So it is fundamentally limited by what the browser is capable of doing. Browsers have, since around 2015, started to implement support for HTTP/2, and it's relatively well supported, but there's one feature of HTTP/2 that was not included in basically any browser implementation. And that's trailers, which are important to the gRPC protocol.

And so what that makes difficult is client streaming. So we've implemented a project called gRPC-Web, which is a different wire protocol that captures a subset of the semantics of the full gRPC protocol. It allows you to make gRPC calls of certain arities from the browser to your native gRPC services running in a data center, by going through an intermediary gateway proxy, which has been around for years.
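For the curious: the published gRPC-Web protocol works around the missing-trailers problem by framing messages in the response body. Each frame is a flag byte plus a big-endian 32-bit length, and the trailers travel as a final in-body frame whose flag byte has the high bit set. Here is a minimal sketch of that framing in Python; real clients are generated by the gRPC-Web tooling, so this is only to show the idea.

```python
import struct

# gRPC-Web frame: 1 flag byte + big-endian uint32 length + payload.
# Trailers are smuggled into the body as a final frame with the 0x80
# bit set in the flags, since browsers can't read HTTP/2 trailers.
TRAILER_BIT = 0x80

def encode_frame(payload: bytes, trailers: bool = False) -> bytes:
    flags = TRAILER_BIT if trailers else 0
    return struct.pack(">BI", flags, len(payload)) + payload

def decode_frames(body: bytes):
    """Yield (is_trailers, payload) tuples from a gRPC-Web response body."""
    offset = 0
    while offset < len(body):
        flags, length = struct.unpack_from(">BI", body, offset)
        offset += 5
        yield bool(flags & TRAILER_BIT), body[offset:offset + length]
        offset += length

body = encode_frame(b"data") + encode_frame(b"grpc-status: 0\r\n", trailers=True)
frames = list(decode_frames(body))
print(frames[0])  # (False, b'data') -- the message frame
print(frames[1])  # (True, b'grpc-status: 0\r\n') -- the in-body trailers
```

The gateway proxy's job is essentially this translation: it reads real HTTP/2 trailers from the backend and re-encodes them as that final body frame for the browser.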


RICHARD BELLEVILLE: And it's relatively simple to use.

ADAM GLICK: Since it's been moved into the CNCF and it's in incubation phase, how has adoption been of gRPC?

RICHARD BELLEVILLE: I can certainly say that I knew about it and was relatively familiar with it before I even joined Google, right? I had a choice of multiple teams, and gRPC was the one that I chose to go with. I have seen lots of different companies adopting gRPC within their data centers. We've got a list of them on our website. Definitely Netflix. We've got CoreOS, Square, Cisco, Juniper. Lots of different places are using it pervasively in their infrastructure.

I've seen that Juniper has a really interesting system where they've got microservices running on network elements, and they all communicate using gRPC. I was using that system before I even worked here. We're also seeing greater and greater adoption among smaller open source projects. It's certainly very popular with big enterprises, but smaller organizations are now starting to adopt it as well.

ADAM GLICK: It's great to see people adopting it. What has been the change in terms of contributors to the project? I know it started as a Google project, and so it was kind of another one of these open source single company projects. Is it still a single company project?

RICHARD BELLEVILLE: There certainly is a lot of contribution from Google still, but we've got contributions from Uber, Lyft, Skyscanner. So there definitely are other companies contributing to gRPC.

ADAM GLICK: What remains to be done for gRPC to graduate in the CNCF?

RICHARD BELLEVILLE: Don't quote me on it. I believe that it's currently more or less a matter of paperwork. I think the proposal has been submitted. And we're just waiting on the status for that.

ADAM GLICK: I wish you good luck with that piece. What comes next for gRPC?

RICHARD BELLEVILLE: Number one priority is definitely just maintaining what we've already built, keeping the lights on. Like, we think we've delivered pretty well on producing a performant, extensible RPC system, and we want to make sure that it continues to meet people's needs. And so there are a lot of language-specific efforts being undertaken to make that happen.

So just to name a few: there is a pure Node implementation being worked on because, as I said, Node is a C extension, a wrapper of Core. And it turns out Node developers aren't very familiar with these compiled binaries that have to be installed on your system.

In Python, we're working on asyncio support, so that you don't have to use multithreading to get good concurrency. That'll improve performance and hopefully be a little more idiomatic for modern Python developers. We're working on usability improvements across the entire set of languages.
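The motivation here can be sketched with plain asyncio: many in-flight calls are awaited on a single thread instead of dedicating a thread to each blocking call. Note that `fake_unary_call` is a hypothetical stand-in for a network round trip, not the actual gRPC asyncio API.

```python
import asyncio

async def fake_unary_call(request: int) -> int:
    # Stand-in for network latency on an RPC; not the real gRPC API.
    await asyncio.sleep(0.01)
    return request * 2

async def main() -> list:
    # Ten concurrent "calls" on one thread; total wall time is roughly
    # one call's latency, with no thread pool involved.
    return await asyncio.gather(*(fake_unary_call(i) for i in range(10)))

results = asyncio.run(main())
print(results)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

With threads, the same concurrency would cost one OS thread per outstanding call; the event-loop model is what the Python asyncio work aims to expose idiomatically.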

I mentioned C#. There's been a contribution from Microsoft of an alternate .NET implementation that, like JavaScript, doesn't require the inclusion of Core. Lots of different improvements happening across the whole set of languages.

ADAM GLICK: That's great to hear. Thanks for joining us today, Richard.

RICHARD BELLEVILLE: Yeah, thank you.

ADAM GLICK: You can find Richard Belleville on Twitter at @gnossen, and you can find the gRPC Project at grpc.io.


CRAIG BOX: Thanks for listening. As always, if you enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find this on Twitter at @kubernetespod, or reach us by email at kubernetespodcast@google.com.

ADAM GLICK: You can always check out our website at kubernetespodcast.com, where you'll find transcripts and show notes. Until next time, take care.

CRAIG BOX: See you next week.