#155 July 23, 2021

Software Supply Chain Security, with Priya Wadhwa

Hosts: Craig Box, Jimmy Moore

The idea of software supply chain security rocketed into the public consciousness in the last year, with the news that US government agencies had been breached. Priya Wadhwa is a software engineer at Google working on open source security, including projects to secure and verify container deployments. She outlines what is being done to make sure this doesn’t happen to you.

Do you have something cool to share? Some questions? Let us know.

Chatter of the week

News of the week

CRAIG BOX: Hi, and welcome to the Kubernetes Podcast from Google. I'm your host, Craig Box.


CRAIG BOX: There are two types of people: those that would pay whatever it costs to go into space, and those for whom there's not enough money in the world that you could pay them to go to space. Which are you, Jimmy?

JIMMY MOORE: You know, Craig, I've been watching the news, and I just think about growing up in Florida in the space shuttle era. And I always dreamed of going to Space Camp or going up in the shuttle, going into space. But I have a few thoughts about this now.

CRAIG BOX: Please share them with the group.

JIMMY MOORE: Well, now I'm a granola living in San Francisco, so I have a few more priorities for humanity before spending that kind of money.

CRAIG BOX: Do you think that the argument that this is just the start and that eventually there will be trickle-down benefits for the rest of the world, do you think that holds water?

JIMMY MOORE: It sounds very familiar and very American, but I'm not sure I'm signing on to that one.

CRAIG BOX: You've really got to look at the cost per minute when you think about these things. Jeff Bezos's rocket ride was 11 minutes end to end with three minutes of weightlessness. I think I'd sign up if you could have an hour in space, but three to four minutes-- not that I can afford it, obviously, but it's a tough economic decision.

JIMMY MOORE: It's a definite flex, and I'll say this. I have two things about this. Three to four minutes of weightlessness after a bumpy ride on capitalism's phallus doesn't seem worth it to me.

CRAIG BOX: There were a lot of Dr. Evil comparisons made. I do remember watching "Austin Powers" in the theater many years ago, and there's a very funny scene which, if you haven't seen, we'll have a link to a version of it in the show notes. Many of them were going around Twitter this week.

JIMMY MOORE: Whoever designed that rocket had to know. I mean, they had to know, right? That's a thing.

CRAIG BOX: How do you feel, then, about the Branson approach, where we have a plane launched off the back of another plane?

JIMMY MOORE: Actually, I thought that was really efficient and elegant, and I said, right, this is the future of space travel. That actually felt really cool.

CRAIG BOX: But you said that in 2004 when Burt Rutan first invented it?

JIMMY MOORE: Oh, I must have missed that one. I was out of the country.

CRAIG BOX: I don't know if you remember, but there was a contest called the XPRIZE 17 years ago, which was basically $10 million to the first people who could go to the edge of space twice within a two-week period or something like that. And an aircraft designer called Burt Rutan won that contest with something that looks very much like what Branson has now. And then Virgin invested in it and said, we'll make a product. And they took a long time, but credit to them, they did get there.

JIMMY MOORE: I'll say that I am impressed with the flex. These men are definitely showing their power, but my version of that is just getting a little more guacamole at the local taqueria. So I can relate.

CRAIG BOX: I got double salmon in my poke bowl the other day. That was a treat. Poke bowls in Britain are definitely nothing on Hawaii, or even continental America, for that matter.

JIMMY MOORE: Well I'm going to start calling you Mr. Branson, then.

CRAIG BOX: Or Mr. Powers. Danger is my middle name.

JIMMY MOORE: Shall we get to the news?

CRAIG BOX: Let's get to the news.


CRAIG BOX: If this week's episode whets your appetite to learn more about the secure software supply chain, Google Cloud is hosting a webinar on container security featuring talks from several previous podcast guests and all-around good people. You can watch it live with Q&A next Thursday, July 29, or catch up on demand afterwards. Google Cloud also announced that registrations for Google Cloud Next 2021 are now open. The event is to be held October 12th to 14th and is fully virtual, which will come in handy as someone went and scheduled a KubeCon at the same time. Both events are completely free to attend, and links to sign up can be found in the show notes.

JIMMY MOORE: GCP this week has added a new cloud intrusion detection service in preview. It provides broad visibility into traffic coming into your cloud environment, as well as East/West VPC traffic from GCE or GKE endpoints, looking for malware, spyware, command and control attacks, and other network-based threats. It also detects exploits and evasion attempts at both network and application layers. Cloud IDS integrates Palo Alto Networks' threat detection technologies as a full first-party Google service. Google also added Windows Server container support to Anthos clusters running on VMware. And multi-cluster ingress, a feature previously only available to Anthos customers, can now be used by GKE customers without an Anthos subscription.

CRAIG BOX: Carrying on the security theme, an exploit was discovered in the netfilter system in Linux which can bypass all modern security mitigations and achieve kernel code execution. A write-up by Google information security engineer Andy Nguyen explains the CVE in great depth, including how it was discovered in the context of a Kubernetes capture-the-flag escape. Vendors are publishing advisories this week, but the bug was fixed in the kernel in April.

JIMMY MOORE: A security advisory for Kubernetes. If you have access to edit endpoints, then you can expose certain pods to the internet, even if other security controls would prevent you. This is another one of those "part of how Kubernetes works" bugs, but a workaround is provided while the design phase starts on a proper fix.

CRAIG BOX: A security advisory for Helm. If a username and password were provided to fetch a chart and that chart referenced a second chart, the first username and password may have also been sent to the second server. The behavior has been fixed in Helm 3.6.1, with the old mode now behind a flag in case you actually needed it to do that.

JIMMY MOORE: Security company Intezer has detected a new attack on Kubernetes clusters running Argo Workflows. Misconfigured dashboards allow internet attackers to drop crypto mining workloads into a cluster. In one case, the cluster had been mining Monero for nine months before anyone noticed. You can check your Argo endpoint from outside your cluster, or Intezer's software will detect and alert on this kind of thing for you.

CRAIG BOX: Rounding out the security section and linking last week's episode to this week's, Sysdig has acquired Apolicy. As the name would suggest, Apolicy is an infrastructure-as-code tool for applying policy across your pipeline based on the Open Policy Agent. Terms of the deal were not disclosed, but I happen to know that Apolicy's CEO Maor listens to the show, so hello and congratulations from us.

JIMMY MOORE: Cockroach Labs has announced general availability of their database operator for Kubernetes. The CockroachDB operator is based on learnings from their cloud service and handles scaling, redundancy, and automation of deployment and upgrades.

CRAIG BOX: Cloudflare had a problem with Kubernetes nodes failing, and after extending the Node Problem Detector to properly detect their node problems, decided they wanted something else. Sciuro, with a silent C, takes its name from the Latin root for "squirrel" and is Cloudflare's solution. It uses Prometheus's Alertmanager to set node conditions rather than requiring something running in-band on the node. The nodes can then be cured using Kured, with a K.

JIMMY MOORE: Finally, if you want to learn more about operators, the CNCF App Delivery TAG, formerly SIG, has published a white paper. It defines and provides a comprehensive guide to cloud-native operators, outlining patterns, recommended configuration, and use cases. It also offers best practices, including advice if you want to write your own operator.

CRAIG BOX: And that's the news.


CRAIG BOX: Priya Wadhwa is a software engineer on the Google Open Source security team. She works on supply chain security, focusing on Sigstore and Tekton. Welcome to the show, Priya.

PRIYA WADHWA: Thanks for having me, Craig.

CRAIG BOX: Your first job at Google was working on developer tools, things like Minikube, Skaffold, and Kaniko. Was security a big concern while working on those tools?

PRIYA WADHWA: Interestingly enough, we had always talked about how it was probably something that we should be doing more, just because there were all these binaries that we were releasing that a lot of people were running. But to be honest, I was so focused on responding to issues and adding new features and the day-to-day of being a maintainer of an open source project that I never really had a chance to focus on security.

CRAIG BOX: Perhaps not so much about the security of the software that you were releasing yourself, but in the way that people would use your tools to build things that then they would want to be secure. Were you ever getting feature requests in relation to that?

PRIYA WADHWA: Honestly, I don't remember it being a huge topic at all. Mostly we would get requests for features or for bug fixes. That was pretty much it. When supply chain security became a bigger conversation in recent months, that's when I also, like many people, started to think about it and started to realize that I had actually been really lucky with the tools that I had been working on that nothing really bad had happened yet, and that there was a lot of room for improvement. Ideally, I was looking for an easy way to add more security to the tools that I was working on.

CRAIG BOX: I think that's true of a lot of things, especially related to the internet. These things start out as very academic things that are very open and people are very communicative, and then as they become more popular, people start seeing the opportunity for abuse and for security malfeasance and so on. And then all of a sudden there's a tipping point in the consciousness where it becomes something that's very important.

PRIYA WADHWA: Definitely. I think the conversation is really important because, obviously, prevention is--

CRAIG BOX: Mightier than the sword, I think, as the expression goes.

PRIYA WADHWA: Exactly. Prevention is preferable to a reaction. Ideally, security would be so easy to do that everyone would just be able to do it without having to put too much effort in, and a lot of potential attacks could be mitigated that way.

CRAIG BOX: I hear a couple of different descriptions of what a software supply chain is. If you think about the physical world supply chain, then you might think about the dependencies of your application, the compilers and the libraries that you bring in. Alternatively, you could think about it in terms of all the things required to take your app from code all the way into production, through compiling and linking and building binaries and so on. How do you think of software supply chain?

PRIYA WADHWA: I think of it as a combination of what you described. I guess a simple way to put it would be, it's the journey from the source code you write all the way to production or that piece of software ending up in a user's hands for them to use. So at the very beginning, it could include anything like source code, dependencies, build configuration, and then every step along in the process to actually getting it out to the person who is going to be using it.

CRAIG BOX: If we think about some places where people may have come across security in the technology that they use, perhaps you think about the App Store on your phone. And you're downloading an application which was built by someone, and they had to sign it themselves and upload it to a store, and there's some proof all the way through that what you're getting was approved and signed in such a way that the phone can only run things that were signed by the vendor. Is that an example of software supply chain?

PRIYA WADHWA: I think that is an example of software supply chain. One of the ideas behind software supply chain is that the end user should be able to verify the software pretty much from the root of where it was created, ideally, and be able to verify that the software is coming from the person or entity that they expect that it should be coming from.

CRAIG BOX: Another thing which has been discussed recently, especially in Linux circles, is the idea of reproducible builds: being able to take a set of source code and prove that the binaries provided by your packaging system can be built from that same source. Is this related?

PRIYA WADHWA: I think this is related. Verifiable builds, or reproducible builds, can provide security in a couple of different ways. On one hand, when you're actually building your reproducible software, you could build it multiple times in multiple different environments and make sure that you get the same thing every single time. And it's one way to ensure that your build system itself has not been compromised. And the other nice thing about reproducible builds is in the open source context, I could build something and someone who is using my software could build the same thing as well and verify that what I'm giving them is also what they expect to be receiving.
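The two-environment check Priya describes can be sketched in a few lines of shell. Here the "build" step is simulated with gzip -n (which omits the timestamp, making the output deterministic) so the example is self-contained; a real pipeline would run its actual build in two independent environments.

```shell
# Run the same deterministic "build" twice, as if in two environments.
mkdir -p build1 build2
printf 'int main() { return 0; }\n' > main.c
gzip -nc main.c > build1/app.gz   # -n: no timestamp, so output is reproducible
gzip -nc main.c > build2/app.gz

# Matching digests give confidence neither build environment injected anything.
h1=$(sha256sum build1/app.gz | cut -d' ' -f1)
h2=$(sha256sum build2/app.gz | cut -d' ' -f1)
[ "$h1" = "$h2" ] && echo "reproducible" || echo "MISMATCH: investigate build environments"
```

In the SolarWinds scenario, a compromised build machine would have produced a different digest from a clean one, flagging the injection.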

CRAIG BOX: A way that all of this has reached people's consciousness recently is through the SolarWinds hack. And there has been a lot of response, especially from the American federal government now, to say, we don't want this kind of thing to happen again. There was an executive order recently from President Biden to say that this is something now that government agencies need to think about. What happened in the SolarWinds hack?

PRIYA WADHWA: In the SolarWinds hack, the attacker was able to gain access to the build platform that SolarWinds was actually building their software on. And on this build platform, they installed a malware service which would inject malicious code into the final SolarWinds software that they were distributing to their customers. And it's an interesting hack because a lot of common mitigation techniques might not have caught it because the bad code was being injected in the build system itself. It didn't exist in source control. And so ordinarily, you have reviewers looking at your code before it's merged, so because the code wasn't in source control itself, that wasn't a way to catch potentially bad code. Because the code was being injected during build time, the final product was signed by SolarWinds. And so to people who were receiving that software, it looked perfectly legitimate.

The other interesting thing is that, typically when you update your software dependency, you expect vulnerabilities to have been fixed if there were any. But in this case, updating the SolarWinds software was how the malware was distributed. This is actually a place where the reproducible builds could have come in handy because if they had had multiple build environments and were building the same piece of software in multiple locations, they could have recognized that one of the build systems was compromised and generating something that was different to what all the other ones were generating.

CRAIG BOX: Is that something that will become standard advice, then: that depending on their risk profile, people should run builds in different places, in the same way that someone might want to run cloud services in different regions?

PRIYA WADHWA: I don't know how feasible it is for people to do, but I think that it would definitely be an additional layer of security that has a lot of advantages.

CRAIG BOX: Now, one of the things we've talked about at Google Cloud is the concept of binary authorization, and that's effectively a way of saying you have to have a signed binary in order to be able to run this application on Borg, and there are GKE and Kubernetes equivalents. That wouldn't have actually done anything in this particular case, I understand.

PRIYA WADHWA: Yes. Because the final software would have been signed, it would have looked legitimate. It would have been run. Whoever was running it would have seen, oh, it's been signed by SolarWinds, and the verification checks out.

CRAIG BOX: So then people, I guess, are going to start saying, that's not enough. We need to have that. That's definitely a part of our security pipeline, but we now need to be shifting further left, to use the parlance.

PRIYA WADHWA: Yeah. There's so many different kinds of attacks, so you have to be doing a combination of different things to try and protect yourself as best as you can. I'm sure new things will come up all the time and new strategies to avoid them will have to come up too alongside.

CRAIG BOX: People talk about provenance in this particular case. Is that proving that something comes from a particular person by identity?

PRIYA WADHWA: Provenance, I would say, is more of a record of how something was built. The idea behind provenance is, if I build my software artifact, my provenance statement would include things like the Git repo that I pulled the code from, the specific commit that my artifact was built at, any environment variables I might have set, and the exact command that I ran to actually build the software. The goal is to get as much information as I can about how a build was conducted so that if someone ever wanted to have more information about it, they could get it from the provenance. A lot of times people will generate provenance for their software and they will sign the provenance itself so that it can be proven that the provenance came from them.
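As a purely illustrative sketch (field names simplified, digests truncated, and not an exact match for any released schema), a provenance statement capturing those fields might look like:

```json
{
  "subject": [
    { "name": "gcr.io/example/app", "digest": { "sha256": "3a1f..." } }
  ],
  "predicate": {
    "materials": [
      { "uri": "git+https://github.com/example/app", "digest": { "sha1": "9f2c..." } }
    ],
    "recipe": {
      "entryPoint": "make release",
      "environment": { "GOOS": "linux", "GOARCH": "amd64" }
    }
  }
}
```

Signing this document, rather than just the artifact, is what lets a consumer trust the build metadata as well as the build output.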

CRAIG BOX: Right, so I guess the signature is where the identity comes in, because in the art world, for example, you talk about the provenance, and it says, "well, someone saw Rembrandt paint this, and then it was sold and then bought by these people," and so on. And if you can get that chain all the way back to the artist, then the art is worth a lot more money. But when we're talking about reproducibility and so on, it doesn't necessarily matter that Rembrandt was involved. It just matters that anyone knows the steps that were taken and could be taken by somebody else.

PRIYA WADHWA: Exactly. But the signature is still really important because, without the signature, there is no way of confirming that the provenance is correct.

CRAIG BOX: So this brings us to the in-toto project, which is in the CNCF. They talk about "farm-to-table" provenance for software.

PRIYA WADHWA: In-toto is a provenance format, and it basically focuses on providing provenance to help maintain supply chain integrity. The idea is that in-toto is designed to hold any supply chain metadata you might want to include as an artifact moves through the supply chain.

CRAIG BOX: And those are the things you mentioned before, like the environment variables and Git commits that were used?


CRAIG BOX: Is that a set of tooling that helps create and verify that metadata, or is it just a format in which you can describe it?

PRIYA WADHWA: It is just a format in which you can describe your supply chain build, but a lot of Sigstore tools have native support for the in-toto format to make it easy to store attestations in a specific place and also query for attestations with specific data inside them.

CRAIG BOX: Now, you mentioned Sigstore there. That is a project with the Linux Foundation to address a few of these software supply chain concerns. Can you talk a little bit about how Sigstore was started as a project?

PRIYA WADHWA: Sigstore is a project in the Linux Foundation. The main goal behind it is to basically make signing and verification for supply chain metadata really easy and transparent. The entire project is open source and free to use. It's kind of going back to what I had said before about my own personal journey. At the time that I was working on the open source Kubernetes developer tools, it isn't even just that security wasn't a huge part of our day-to-day conversation, but there was no easy way to do it. The goal of Sigstore is to fill that gap and make it easier to sign and verify software in general.

CRAIG BOX: There are a lot of tools in Sigstore that are related to containers. Is it a container-only project, or would you say it was containers-first?

PRIYA WADHWA: It was containers-first, but the goal is to support other types of software as well. I know that we have work going into supporting Wasm modules and adding in-toto attestation support to our transparency log. So I think we started with containers just because there was no really easy way to sign containers at first, so it seemed like an easy first step. But the goal of Sigstore is to make signing and verification easy for software in general.

CRAIG BOX: Now, that first step was a project called Cosign. I guess that's like the 'Sine' project, but 90 degrees offset?

PRIYA WADHWA: [LAUGHING] Yeah, exactly. It stands for Container Sign. It's pretty much exactly what it sounds like. It's a tool for signing and verifying containers. It can be used to sign other pieces of software too. That's pretty much it.

CRAIG BOX: I guess I'm surprised that's not something that was already built into the spec for OCI, for example.

PRIYA WADHWA: Yeah, it just speaks to how maybe supply chain security was not at the forefront of everyone's minds until recently.

CRAIG BOX: Where does Cosign fit into my build and release pipeline?

PRIYA WADHWA: Assuming you are building containers, Cosign could be used to verify the base images that you are building on top of and make sure they are coming from the source that you think they're coming from before you actually build your images. Once you've built your images, Cosign makes it easy to sign them so that any consumers of your image can verify that they are actually coming from you. It could be useful in terms of, if we want to make it specific to Kubernetes, maybe you want to verify all of the images in your deployments before you deploy them to a cluster. I think there's a tool called Connaisseur, which is basically a Kubernetes admission controller that I was reading about the other day that leverages Cosign to verify images before allowing them into your cluster.

CRAIG BOX: So am I just effectively signing it at this point to say this is something that was built by me, or am I somehow relating it back to the provenance we discussed?

PRIYA WADHWA: So you're signing it to say that it was built by you. You could have provenance included to provide more information about how the image was built. But the signing itself is basically you saying, I built this thing and I am signing it, and you can verify that it came from me.

CRAIG BOX: How do I verify who you are in this particular case?

PRIYA WADHWA: Cosign supports just using a classic public/private key pair. So you generate your key pair, sign it with the private key, and then make your public key public. And people can verify it themselves if they want to. Using a couple of the other Sigstore tools and services like Rekor and Fulcio, which is our certificate authority, can make this even easier so people don't actually need the public key to verify your image.
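That flow maps onto a few cosign commands. This is a sketch: the image reference is hypothetical, and flag spellings have varied between cosign releases.

```shell
# Create cosign.key (private, password-protected) and cosign.pub
cosign generate-key-pair

# Sign an image that has been pushed to a registry; the signature
# is stored in the registry alongside the image
cosign sign --key cosign.key gcr.io/example/app:v1.0.0

# Anyone holding the public key can verify it
cosign verify --key cosign.pub gcr.io/example/app:v1.0.0
```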

CRAIG BOX: Now, a few weeks ago we talked to Dan Lorenc, who's on your team, and he was talking about a key signing ceremony to kick off the root of trust for presumably the Fulcio project and Sigstore as a whole. How did that process end up playing out?

PRIYA WADHWA: Oh, it was really fun. Did you get a chance to watch it?

CRAIG BOX: I didn't, but I hope that he wore a robe and wizard hat as I had been promised.

PRIYA WADHWA: That would have been awesome. But yeah, the root key signing ceremony went great. We did it on a live stream, had a good amount of viewers. I was one of them. People were verifying that the root keys were coming from the people they were supposed to be coming from and verifying the metadata as the process went on. And yeah, now we have five root key holders for the Sigstore project.

CRAIG BOX: So how do I go from those five trusted people to-- one of them presumably has to sign something on behalf of Docker, for example, and say, well, if Docker the company want to publish something, they're creating their own key and then it will be signed by one of those root people. And perhaps then they could have an authority which allows their customers to be signed by them.

PRIYA WADHWA: First of all, I think one of the benefits of having five root key holders is that a majority of them need to be present to be able to sign anything at all. So it would be probably at least three root key holders in this scenario. I think that we are working on making our root key available to other open source projects as well. I think if it was to be done today, it would have to be done manually. We would need an open source project to come to us and say, oh, we're interested in having our key signed by your root key, and we would have to manually verify that it was legitimate. But I think there are plans to try and automate this process in the future.

CRAIG BOX: Speaking of automation, one of the things you work on is a product called Tekton Chains. How does that tie into Sigstore?

PRIYA WADHWA: First, I think we can talk about how it ties into Tekton because it was created with that in mind. A simple way of explaining Tekton is it's CI/CD for Kubernetes. The idea is, if you want to run your CI/CD system on your cluster, you can use Tekton and you can leverage common Kubernetes resources like Secrets and Volumes to build your final software artifact easily on cluster.

CRAIG BOX: So I have an object in Tekton called a "CI step", for example?

PRIYA WADHWA: Yeah, so, in Tekton, we have this concept of Pipelines and they execute different steps. One of those steps could be analogous to a step in your CI system.

CRAIG BOX: So where do Chains come into the picture?

PRIYA WADHWA: The basic idea of Chains is that it is supply chain security for Tekton. Say you've had your Tekton pipeline running for a while and it builds your OCI images for you. It pulls in your git repo, builds the image, and then uploads it to a registry. Tekton Chains was designed to run alongside Tekton and take care of all of the supply chain aspects of your Tekton build pipeline. It's still in early stages, but right now we do two things around supply chain. So the first one is that Tekton Chains will watch your Tekton pipeline and wait for it to build artifacts, and it will sign them for you. If your Tekton pipeline is already building an image and pushing it to a registry, Tekton Chains will recognize it and sign your container image for you with whatever pre-configured keys you have set up.

The other nice thing that Tekton Chains does is it will create provenance for you using the in-toto attestation. So for each step that you have already running in your Tekton pipeline, Tekton Chains can create a corresponding in-toto attestation that will have all the information around what image was run, what command was run, and any other data people might want to know about how the final piece of software was actually built.
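Chains is driven by a ConfigMap in the tekton-chains namespace; asking it to emit in-toto attestations looks roughly like the following sketch. The key names follow the Chains documentation of the time and may have changed since.

```shell
# Emit in-toto-format provenance for each TaskRun, and record
# entries in a transparency log
kubectl patch configmap chains-config -n tekton-chains \
  --type merge \
  -p '{"data": {"artifacts.taskrun.format": "in-toto", "transparency.enabled": "true"}}'
```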

CRAIG BOX: Where is all that data stored, and how do I get access to it if I've got the name of a container, for example?

PRIYA WADHWA: In terms of Tekton Chains, the data can be stored in a location that you would configure. So you could store it in your GCS bucket or as an annotation on your Kubernetes object itself or wherever you want. If you want the information to be publicly available so that consumers of your image can find the attestation themselves, then that's where the Sigstore services come into play. So you could, in theory, store your attestation in Rekor, which is our publicly available transparency log, and then anybody could query the log in the future and see the attestation for the image that they are searching for.
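Querying the public Rekor instance can be sketched with the rekor-cli tool; the angle-bracket placeholders are stand-ins for real values.

```shell
# Find log entries for an artifact by its digest...
rekor-cli search --sha <sha256-of-artifact>

# ...then fetch a full entry, including any attached attestation
rekor-cli get --uuid <entry-uuid>
```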

CRAIG BOX: So far we've talked a lot about the process of building software which we presumably have the source code for. But a lot of the source code in people's software starts with "import this open source project" or "pull down this library from a repository." Now we start worrying about the bill of materials required to build software, and the potential that there are problems not just in our own code, but in code that we may not even know we're using. So how are you looking at the concepts and problems behind maintaining this bill of materials for open source software?

PRIYA WADHWA: For anyone who might not know, SBOM stands for software bill of materials. Basically, the idea behind them is that for a piece of software, the SBOM will contain a list of all of the dependencies and probably at what versions those dependencies were built into the final piece of software. Theoretically, this could be useful if, say, you know a specific dependency has a vulnerability and you want to check whether your software includes it and make sure that you're running a newer version. But I think that most SBOM generator tools are still in early stages. I don't think any of them are recommended for production use yet. The whole area is still early.
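For a sense of the shape, a minimal SBOM in the CycloneDX JSON format might look like the following; the component names are invented for illustration.

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.2",
  "version": 1,
  "components": [
    { "type": "library", "name": "example-http-lib", "version": "2.4.1" },
    { "type": "library", "name": "example-json-parser", "version": "1.0.3" }
  ]
}
```

A vulnerability scanner can then match the name/version pairs against CVE databases without having to rebuild the software.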

CRAIG BOX: Would that create an artifact that would eventually become part of the provenance of the binaries?

PRIYA WADHWA: I think it could be useful if you had an SBOM similar to provenance. It would detail all of the different dependencies that went into your final artifact, and if you were to sign that SBOM, it would lend some validity to it. But I think right now it's still unclear how to generate a correct SBOM for any given piece of software.

CRAIG BOX: A lot of interesting work has been happening in this space from Google lately. There is a project called Open Source Insights, a dependency graph tool that generates some really interesting pictures showing that you might have a very small amount of your own code, but there's an iceberg sitting under the ocean that you sit on top of.

PRIYA WADHWA: Something that Dan, who I work with, likes to say, and I'm going to steal this from him--

CRAIG BOX: By all means.

PRIYA WADHWA: Is that if you find a thumb drive on the ground, you are not going to plug it into your computer and see what it is. But most people who are writing code will pull in open source dependencies without thoroughly checking the code that they're pulling in. And when you think about it, both of these things are not very safe, and yet one of them we would definitely not do and the other we do without really thinking about it.

CRAIG BOX: But again, that's the thing that's changed over time, because, for example, there was an alternate reality game around a Nine Inch Nails album, maybe in the early 2000s. They had all these hints to things hidden in pictures and so on, and I remember hearing that they put them in flash drives and left them in the bathroom at a concert or something. And people would take them and find them, and obviously, don't do this in production, but maybe that's a thing that people will think twice about doing these days.

PRIYA WADHWA: Yeah, exactly. This is where the Open Source Insights project comes in. It's unlikely that I will go through every single dependency, direct and indirect, that I pull into my project. The Open Source Insights project aims to do it for you, which is nice. The idea is you can go to the web page, I think it's deps.dev, and type in your open source library that you want to use, and it'll give you some information about basically security-related stuff like, oh, were CVEs found in this library, or does it appear to be safe? It's a way to take the work out of deciding if you should vendor something in or not.

CRAIG BOX: I have heard of cases where people have acquired access to an open source application and then gone and put something in secretly, so perhaps it's not quite right to say "this is safe", but rather, "this is safe at this time, at this particular version".

PRIYA WADHWA: Exactly. Nothing is 100% certain, but it's useful to just have a guide. The Open Source Insights project goes hand-in-hand with the Scorecards project, which is another open source project that my team works on. The idea behind Scorecards is that it will assign a security score to a GitHub repo based on a collection of metrics that have been decided upon. The same metrics are applied to every single GitHub repo, so it is standardized. And the idea is that you could go to a repo, look up the score, and it would help you decide if that repo is secure enough or you are comfortable using the tool from that repo or vendoring in the code in it.
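[Aside from the transcript: the core idea of Scorecards, a fixed set of checks rolled up into one standardized number, can be sketched in a few lines. The check names, weights, and scores below are invented for illustration; the real project defines its own checks and aggregation.]

```python
# Toy illustration of the Scorecards idea: each check yields a score on a
# 0-10 scale, and a weighted average produces one overall number per repo.
# The check names and weights here are hypothetical, not the real project's.

CHECK_WEIGHTS = {
    "Branch-Protection": 3,
    "Code-Review": 3,
    "Vulnerabilities": 2,
    "Pinned-Dependencies": 1,
}

def aggregate(check_scores: dict[str, float]) -> float:
    """Weighted average of per-check scores, each on a 0-10 scale."""
    total = sum(CHECK_WEIGHTS[name] * score for name, score in check_scores.items())
    weight = sum(CHECK_WEIGHTS[name] for name in check_scores)
    return round(total / weight, 1)

# Because the same checks run against every repo, the resulting numbers
# are comparable across repos.
repo = {"Branch-Protection": 8, "Code-Review": 10,
        "Vulnerabilities": 10, "Pinned-Dependencies": 4}
print(aggregate(repo))
```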

CRAIG BOX: Another thing that was recently introduced by the Google Open Source team is SLSA, or Supply Chain Levels for Software Artifacts. What can you tell me about the SLSA project?

PRIYA WADHWA: The SLSA project is basically a framework around supply chain security. The levels range from SLSA 1 to SLSA 4, with SLSA 4 being the ideal and most secure setup for your supply chain. The idea behind it is it aims to protect you against the most widespread supply chain attacks, and so it's a set of standards that at each level will guarantee you a different level of security.

CRAIG BOX: No one thought to name the levels Mild, Medium, and Hot.

PRIYA WADHWA: That would have been great, Level 4, Extra Spicy.

CRAIG BOX: What kind of things change as you go up through the levels?

PRIYA WADHWA: Level 1 is the most basic level, and basically includes a couple of things: a scripted build, and some sort of provenance generated for the software that you are building. As you increase levels, obviously there are more requirements. And so SLSA Level 4, which is the most difficult to achieve--

CRAIG BOX: I dare you to eat more than 10 in one sitting.

PRIYA WADHWA: Will require things like your source being version controlled, two people reviewing every single PR before it's merged, and your build being hermetic, so it doesn't have network access while it's running. And so a lot of these things may not even be possible right now with really common build systems that people are using for their software. So SLSA Level 4 is the goal. The reason the lower levels are there is because it might not be feasible for everyone to achieve Level 4.
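[Aside from the transcript: the provenance that SLSA asks for is typically expressed as an in-toto attestation. The fragment below is a rough sketch of that shape; the top-level field names follow the in-toto statement and SLSA provenance predicate, but the values are made up, and the exact predicate schema should be checked against the SLSA specification.]

```json
{
  "_type": "https://in-toto.io/Statement/v0.1",
  "subject": [
    {
      "name": "example.dev/my-image",
      "digest": { "sha256": "abc123..." }
    }
  ],
  "predicateType": "https://slsa.dev/provenance/v0.2",
  "predicate": {
    "builder": { "id": "https://example.dev/my-build-system" },
    "buildType": "https://example.dev/build-types/scripted-build",
    "invocation": {
      "configSource": {
        "uri": "git+https://github.com/example/repo",
        "digest": { "sha1": "def456..." }
      }
    },
    "materials": [
      {
        "uri": "git+https://github.com/example/repo",
        "digest": { "sha1": "def456..." }
      }
    ]
  }
}
```

The `subject` names the artifact that was built, and the `predicate` records who built it, how, and from what inputs, which is exactly the evidence the higher SLSA levels require you to generate and protect.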

CRAIG BOX: And the Sigstore projects will obviously help provide some of those things that tick off the checkboxes as you move up those levels.

PRIYA WADHWA: Definitely.

CRAIG BOX: Does it seem fair, then, to think about a future where it's just on by default that everything you do is signed and the metadata and the provenance is published and available?

PRIYA WADHWA: For open source, it's definitely possible to get to a point where all the software that's being consumed is signed and there is provenance for everything. Obviously, it would really depend on if the tools could be easy to use, but also broad enough that most people would be able to apply them to their specific use case, which is always a tough line to walk. That would be the ideal outcome, that adoption is huge and everyone starts signing everything.

CRAIG BOX: Even with the tooling, you might not want to have to think about it. You just want to know that everything is good until something isn't, and if someone tries to sneak something in, then because all of the infrastructure is there, you know that nothing can run unless it was signed, and by default, everything is signed.

PRIYA WADHWA: So verifying goes hand-in-hand with signing. Signing something isn't super useful unless somebody on the other end is making sure that it's coming from where they expect it to come from.

CRAIG BOX: It's like doing backups without doing restores. I guess from what I've heard so far, you've put a lot of effort into signing and having this stuff be available. You really do that so that it can all be verified, and that it can be manually verified in some cases. But at the rate of change of software, it almost makes sense just to assume that it will all be automatically verified, and then only alert people in a case where there's a fault.

PRIYA WADHWA: Definitely. And that's where automated tooling for verification also comes in, like the Connaisseur project that I mentioned earlier. But one of the key parts of Sigstore is that it's not only open source, but it's also transparent. All the certificates that are issued by Fulcio, which is the certificate authority in Sigstore, or all of the entries that end up in Sigstore's transparency log, are all available to the public. So yes, it's important for people to sign things and add them to the log and make sure that people know that they've done it. But on the other hand, people also need to be auditing the log and making sure that everything looks good, and if something sketchy happens, that they bring it up and they know that it has happened.
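[Aside from the transcript: the "anyone can audit the log" property comes from the log being a Merkle tree, where an inclusion proof lets you check that an entry is in the log against a published root hash, without trusting the log operator. The sketch below is a heavily simplified toy version; the real Rekor log uses RFC 6962 hashing with domain-separation prefixes, which this omits.]

```python
# Toy transparency-log sketch: leaves are hashed into a Merkle tree, and an
# inclusion proof (the sibling hashes up the tree) lets a client recompute
# the root from a single entry. Simplified: no RFC 6962 leaf/node prefixes.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _next_level(level: list[bytes]) -> list[bytes]:
    if len(level) % 2:                  # duplicate the last node on odd levels
        level = level + [level[-1]]
    return [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold leaf hashes pairwise up to a single root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def inclusion_proof(leaves: list[bytes], index: int) -> list[bytes]:
    """Sibling hashes needed to recompute the root from one leaf."""
    proof, level, i = [], [h(leaf) for leaf in leaves], index
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = i + 1 if i % 2 == 0 else i - 1
        proof.append(level[sibling])
        level = _next_level(level)
        i //= 2
    return proof

def verify_inclusion(leaf: bytes, index: int, proof: list[bytes], root: bytes) -> bool:
    """An auditor's check: does this entry hash up to the published root?"""
    node, i = h(leaf), index
    for sibling in proof:
        node = h(node + sibling) if i % 2 == 0 else h(sibling + node)
        i //= 2
    return node == root

log = [b"entry-0", b"entry-1", b"entry-2", b"entry-3"]
root = merkle_root(log)
proof = inclusion_proof(log, 2)
print(verify_inclusion(b"entry-2", 2, proof, root))    # True
print(verify_inclusion(b"tampered", 2, proof, root))   # False
```

Any change to a logged entry changes its leaf hash, which changes the root, so "something sketchy" in the log is detectable by anyone holding the published root.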

CRAIG BOX: Now, you can judge the validity of a new concept in the cloud native ecosystem by when it gets its first KubeCon pre-day. And at this upcoming KubeCon, we have SupplyChainSecurityCon. What can I expect to learn there?

PRIYA WADHWA: Oh, it's going to be exciting, Craig. I can tell you that much. We're going to be talking about supply chain security, obviously, probably in relation to KubeCon. But I think it's just going to be a lot more about what I've been talking about today, stuff about Sigstore, stuff about Tekton, stuff about some of the big hacks that have happened in the past few months, but just a lot more.

CRAIG BOX: And if you want to learn more about this a bit sooner, Google Cloud's also running a container security webinar next week.

PRIYA WADHWA: Yeah, I'm excited to see it.

CRAIG BOX: I look forward, at least, to the ubiquity of all of this technology. It's great to know that there are people out there who care about this technology, but to me, I think it will be more successful when it blends into the background and I know that I'm safe without having to stress about it myself. So what things do you think need to happen before we get to that world, and in what way can people contribute to the effort?

PRIYA WADHWA: Right now, most of the Sigstore tools are not GA, not ready for production. And so a good first step would be to get more contributors on board, continue to add features that people are looking for, and really stabilize each of the tools and services that Sigstore offers so that people can actually start using it and start using it in production.

CRAIG BOX: And if this is something that people want to contribute to, where would you like them to go?

PRIYA WADHWA: Sigstore.dev is a good starting point to learn about each of the tools and services I've mentioned, or visit our Slack channel. I promise we're super friendly and we would love to have you.

CRAIG BOX: All right. Well, thank you very much for joining us today, Priya.

PRIYA WADHWA: Thanks, Craig.

CRAIG BOX: You can find Priya on Twitter, @priyawadhwa16. And I did check, and 0 through 15 were all taken.

PRIYA WADHWA: Yeah, it's a really common name.


CRAIG BOX: Thanks for listening. As always, if you've enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on Twitter, @KubernetesPod, or reach us by email at kubernetespodcast@google.com.

JIMMY MOORE: You can also check out our website at kubernetespodcast.com, where you'll find transcripts and show notes, as well as links to subscribe. Until next time, take care.

CRAIG BOX: See you next week.