#173 March 23, 2022

ThreatMapper, with Sandeep Lahane and Owen Garrett

Hosts: Craig Box

ThreatMapper is an open source tool that hunts for vulnerabilities in your production Kubernetes environment, and ranks them based on their risk of exploit. It is built by Deepfence, who also sell a commercial product based on it called ThreatStryker. Co-founder/CEO Sandeep Lahane and head of products/community Owen Garrett join Craig to discuss how to decide what to open and what to keep closed, and just how deep his fence needs to be.

Do you have something cool to share? Some questions? Let us know:

Chatter of the week

News of the week

CRAIG BOX: Hi, and welcome to the Kubernetes Podcast from Google. I'm your host, Craig Box.

[MUSIC PLAYING]

CRAIG BOX: A lot of people in technology, and Cloud Native specifically, are quite particular about their morning beverages. We talked to Frederic Branczyk recently about his decent coffee machine. Former guest Ahmet Alp Balkan has even gone as far as to plant his own coffee plants in Seattle, with the largest one producing three to five beans per year.

Let us turn though to the other end of the coffee spectrum. As I've been bouncing around my various Airbnbs, there's been a wild difference in the quality of a cup of coffee I've been able to make with only the tools provided.

The nicest Airbnb we stayed at had an actual honest to goodness espresso machine, the $200-ish, sits-on-your-countertop type. That's roughly what I used every day before moving, with a latte as my drink of choice.

The safest thing to do is to give a guest a jar of instant coffee. A couple of places did this. Almost all provided teabags. I can already hear the scorn from the tea drinkers. Some went posh and provided a box of coffee bags, which I'm sure are really bad for the environment. Some provided all-in-one things with powdered milk included which just got left in the box.

Commonly found in the Airbnb kitchen is the cafetiere, also known as a French press, or locally, the lovely sounding "coffee plunger". I had to remind myself how to use one of these, and the output could be loosely described as "coffee-flavored water", as opposed to the "coffee-flavored hot milk" that I'm looking for.

A chance phone call with a friend in the UK identified some other odd things that were sitting on the shelf behind me. There were moka pots, or stovetop espresso makers. Never seen one before. Didn't even know they came apart. The support on the gas hob is too big, so they'll fall through if you breathe too closely to them. But what do you know, they're not half bad.

Now, using my moka pot and relegating the cafetiere to froth up the milk, which I heated in the microwave, I can deliver a cup full of liquid that is almost, but not quite, entirely unlike a latte. I'd call it the mochachino, but that name is already taken.

Let's get to the news.

[MUSIC PLAYING]

CRAIG BOX: The Go programming language has released its largest update since its launch in 2009. Go powers all the parts of the cloud native ecosystem that the hipsters haven't tried to rewrite in Rust, including projects such as Kubernetes, etcd, Prometheus, Istio, and Terraform. Indeed, over 75% of cloud-native projects are written in Go, with 10% of the world's developers using it.

Pulling straight from the low-hanging branches of developer demands, Go 1.18 introduces generic functions and types. Also included is fuzz testing out of the box, further establishing Go as a preferred language for developing secure applications.

Speaking of fuzzing, the etcd project has now integrated continuous fuzzing, with support added to Google's OSS-Fuzz project.

18 fuzzers were written, which so far have uncovered eight bugs in etcd. The low number is cited as a sign of the maturity of the project. The work was done by Ada Logics and sponsored by the CNCF.

Veritas has made the bold claim that 48% of organizations that have deployed Kubernetes have already experienced a ransomware attack. The data protection vendor describes Kubernetes as an Achilles heel for defense against such attacks, without making it clear if they understand the Greek pun they're making. With a purported 89% of the 1,100 respondents saying that ransomware attacks are an issue for their organizations today, it's a good thing that Veritas are ready with a solution to sell you.

Last week, we mentioned that the NSA and CISA had updated the Kubernetes hardening guide but didn't provide a changelog. The team at ARMO heard the show — or at least I choose to believe — and their CTO, Leonid Sandler, published a changelog in the form of a blog post this week. ARMO's open-source Kubescape tool can be used to test clusters against these guidelines, as well as others.

A new online boot camp from the Linux Foundation and the CNCF will help people new to the space to get up to speed with cloud-native development practices. Composed of five courses and the Certified Kubernetes Application Developer exam, the boot camp is designed to take 10 to 15 hours per week over six months. The package retails for $950. Check the show notes for a discount code.

A new deployment vendor, Plural, has launched with the announcement of a $6 million seed round. Plural is a unified application deployment platform for running open-source software on Kubernetes. It launches with a catalog of over 30 apps, covering data, ML, observability, and more. It itself is open source and promises an eventual paid edition to address the needs of enterprise users. The funding round was led by SignalFire.

For years, Docker Desktop for Mac has been plagued by poor file system performance when synchronizing or copying files from the Mac to the Linux VM. A new version 4.6 uses an experimental virtiofs file system as well as offering other fixes, promising to end the issue. Indeed, one user saw an operation that used to take 33 minutes now complete in a rather more pleasant 42 seconds.

Finally, a follow-up from last week's show. The Ever Forward container ship is still stranded in Chesapeake Bay. We'll keep you updated as the situation progresses.

And that's the news.

[MUSIC PLAYING]

CRAIG BOX: Sandeep Lahane is founder and CEO at Deepfence and created the cloud-native packet filtering technology used by their products. He has had various programming gigs in the areas of security, analysis, virtualization, and networking. Prior to Deepfence, Sandeep co-founded another security startup that failed to take off.

Owen Garrett is head of products and community at Deepfence. Prior to that, he led products at NGINX and has a history of managing both open source and enterprise, balancing the interest of community and the customer. Welcome to the show, Sandeep.

SANDEEP LAHANE: Nice to be here Craig. Hi, everyone.

CRAIG BOX: And welcome, Owen.

OWEN GARRETT: Hi, Craig. Hello, everyone. Nice to be here.

CRAIG BOX: A startup that failed to take off. I feel that is the story we need to hear first.

SANDEEP LAHANE: Absolutely. That has been a story of learning. And it goes all the way back to 2016. I was just coming out of my gig at FireEye. We did some cool stuff at FireEye, doing process introspection from inside the hypervisor. That was a pretty good bit of technology. And just a year prior to that, Heartbleed had happened. The first really famous vulnerability, a branded vulnerability.

CRAIG BOX: The one they gave a name to.

SANDEEP LAHANE: Yeah, the one that they gave a name to in OpenSSL. A large part of my career prior to FireEye I had spent working at a Bay Area startup called Parallocity. What we were really doing back then was finding memory buffer overflow issues at runtime, concurrency defects, data races in heavily concurrent programs. So that was my background.

Heartbleed came, and one of the first few things that came to my mind is, we were solving this for five years, six years, we sold the product to Cisco then Palo Alto and then everyone. Why is nobody solving this at scale as a security product? And that was the genesis of my first startup. The idea, or the technology that we were working on, was based on a compiler frontend. Remember AddressSanitizer from Google? Which has become so super mainstream now, based on Clang.

Something of that sort is what we were building back then. AddressSanitizer was in its infancy back then; Clang was just beginning to shape up in a good way. We were early in that market, really. But the thing is, when you try to solve issues or buffer overflow problems like Heartbleed, how do you do this at scale without requiring customers to recompile, reinterpret their code?

That was the biggest problem. Just then, Intel SGX was being talked about, really. And LXC was always there. We did not have containers or Kubernetes in the form or the shape that we have now. But LXC existed for many years, even prior to that. The genesis of that idea, Craig, was essentially, what if we do two or three things? You have a user space sandbox, something like the Chrome sandbox that you have now, which is system call brokering and stuff like that. You have a user space sandbox, which essentially has control over what is getting executed, number one.

And you augment that with hardware primitives, like Intel MPX and Intel SGX to do low-level things. Essentially, Intel MPX for detecting buffer overflows, SGX for storing the original program control flow. So if there is any deviation from control flow, you try to figure it out. So in a way, you're solving all the buffer overflows by trapping memory accesses at runtime.

If there is a deviation in program control flow, you check with the golden copy that you've saved within SGX and say, hey, look, this is looking different. That was the whole idea. We felt that it was a pretty cool technology, I guess. That's the builder's dilemma, really. When we took it to some customers, it looked like, hey, Heartbleed happens once every five years. Of course, now it is happening more often, those kind of issues.

It just was too niche of a solution to that sort of a problem. Looking back, I think Intel MPX is gone now. Even the support has been removed from the Linux kernel. SGX has become a solution for a different set of problems. SGX itself has a bunch of flaws. So I think the whole thing just wasn't ready for the market. It was a cool bit of technology. We could never commercialize it.

CRAIG BOX: It does sound like the sort of thing that would have worked perfectly until the Spectre vulnerability, and then all of a sudden, it might not have worked at all.

SANDEEP LAHANE: Absolutely. The kinds of flaws that you start seeing at that level would be flaws like Spectre, the ones in SGX itself. It was just too far ahead of its time. Looking back, it wasn't particularly bright is how I would look at it.

CRAIG BOX: Well, I think it's fair to say that NGINX is a startup that did take off. Speaking there of things being before their time and needing product market fit, Owen, what did you learn from your time at NGINX?

OWEN GARRETT: It was a fantastic journey. It was really humbling, being at NGINX. Prior to NGINX, I'd worked for an enterprise software company. And every deployment of the software was a team effort. It was a battle with the sales team and the engineering team and the marketing team. We fought for every single deployment. And NGINX was my first proper exposure to helping to manage and run an open-source project.

It was, for its time, perfectly designed. It was a perfect fit for the market. It followed the Unix philosophy to the letter. It was small, it was lightweight, it did one or two things and it did those things absolutely perfectly. With a community behind it, it took on a life of its own. And that's what I loved about open source. That's what I learnt. The energy that you can get once you have a product that has the right product market fit, then there's nothing stopping its success, because people will need it and people will use it.

The challenge then comes of, how do you sustain that product? Even putting all the commercials aside, at NGINX, we had a huge burden of responsibility. We put a huge amount of effort into the quality of the software, which slowed down development because, by the end, half the internet was literally running on NGINX. So we could not afford to get things wrong.

If we put the wrong code in the product, if we made the wrong decisions, we could break parts of the internet. There's a big responsibility, as well, with open source. And what I learned, what I enjoyed most of all, was balancing that: working with the community while also trying to build a commercial enterprise on top that added to the open-source product rather than taking away from it.

We didn't get that balance perfect. We got a lot of things right. It set me up for looking at open-source strategies and what we can do here at Deepfence to build a product that fits for our community, but still allows us to create a successful business that can sustain that product.

CRAIG BOX: Would NGINX like to have had a technology that could hot-patch the program in the case of any vulnerabilities?

OWEN GARRETT: We looked at all sorts of things like that. One of the things that we toyed with, but that never quite made it, was ways of dynamically loading code in and modifying how we handled traffic, how we rejected or allowed it. So there was lots of potential for things like that. But we also had to understand our place within the stack.

We were low down, not at the bottom, but low down. We were core and very, very broad. It wasn't our place to be overburdened with that kind of sophistication. We had to do our simple task and get it just right. And that's what we did. And that's what drove NGINX to where it's got to now.

CRAIG BOX: Deepfence, you describe it as a security observability company. First of all, Sandeep, I'd like to ask you how you met your co-founders and how you decided that this was the problem space that you wanted to tackle.

SANDEEP LAHANE: I was coming out of my earlier gig. Docker and Kubernetes were just getting that mainstream traction back then. I'm talking about early 2018. It was just me back then. I would go to Black Hat and RSA and hear all these great conversations around microservices and containers. It really started with this one idea — what about delivering security as a microservice?

Everything is becoming a microservice. Here is one more container, one more microservice. And I'm using these words interchangeably. But what if I give you one more container that looks after all of your containers? Security delivered as a microservice. That is how it really started. People would say, yeah, that makes sense. That was pretty high level. I mean, we really started with that. And then really started building towards that. One of the global banks was our first design partner. They let me sit in their office and work with their teams to really define what this is going to be.

While the genesis of this was being able to deliver security, scale it just like microservices are being scaled, the core technology that was built up was really around eBPF, which, again, was becoming mainstream, though not as much as it is now in 2022, of course.

The whole idea that we came up with was, what is the core problem in cloud-native environments? What is it that you don't have? You have vulnerability scanners, you have RBAC, you have a lot of features in Kubernetes itself coming in. What is it that you don't have? Almost universally, the feedback that came in is, hey, can I see the interactions between and amongst these services? Can I see the traffic, the east-west traffic? Can I have visibility at the packet level?

And, again, without a proxy. Proxies have their own issues. They're great in some places, but not so much in others. What we ended up building was essentially an eBPF-based traffic interceptor, which would look at traffic, grab it, sample it, completely out of band, of course, then build the whole of layer 7 out of it.

So you're starting with packets, you're doing TCP reassembly, and then you're really building out HTTP if it's HTTP, DNS if it's DNS, really building the whole thing out. And then you're looking deeper into it. What are you looking for now? Now that you've reconstructed everything that you wanted to look into, what sort of issues do you really want to look deeper into?

And then that's where the next iteration came in, which is about being able to look into traffic with known industry-standard rulesets, looking for anomalies, looking for command and control communication.
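For listeners who want to picture that reconstruction step, here is a minimal Go sketch of the tail end of the pipeline Sandeep describes. It assumes the eBPF capture and TCP reassembly stages (elided here) have already produced a raw byte stream, and simply parses that stream back into a layer-7 HTTP request. It is an illustration, not Deepfence's actual code.

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Pretend this byte slice is the output of the TCP reassembly stage.
	raw := []byte("GET /api/orders HTTP/1.1\r\nHost: payments.internal\r\nUser-Agent: curl/7.79.1\r\n\r\n")

	// Rebuild the layer-7 view: parse the reassembled stream as HTTP.
	req, err := http.ReadRequest(bufio.NewReader(bytes.NewReader(raw)))
	if err != nil {
		fmt.Println("not HTTP; try other protocol parsers (DNS, and so on):", err)
		return
	}
	fmt.Printf("observed %s %s for host %s\n", req.Method, req.URL.Path, req.Host)
}
```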

The next block essentially was, hey, my traffic is encrypted. Are you going to be able to look into this? That was the next iteration of the product. And then, of course, with TLS 1.3, you cannot put a man in the middle, the MITM proxies; perfect forward secrecy is first class. We essentially came up with an eBPF-based interception mechanism to look at traffic even before it's encrypted. That was the genesis. It started as a single-founder company. Shyam, who is our CTO now, was my colleague at Juniper. We were working in the same team. I convinced him to join as CTO. He's a co-founder. We were six people until about the middle of 2020.

After raising our Series A, we're about 30, 32 people here. I haven't met all of them. I have still met only the six people. I haven't met Owen either.

CRAIG BOX: Yes, I was going to say it's a good opportunity I have to introduce you two to each other.

SANDEEP LAHANE: Yeah, thank you. [LAUGHING]

OWEN GARRETT: Hi, Sandeep. [LAUGHING]

CRAIG BOX: Your announcement post for your Series A funding talks about discovering what you are and what you are not. You talked a lot there about the idea of what you are: you want to be able to use eBPF to do security vulnerability scanning. Which things did you decide explicitly not to build?

SANDEEP LAHANE: From the core tech that we had built, which was essentially the packet filtering, what happened is cloud security is a very interesting market. If you're purely playing on the right side, the market pushes you to the left to add some scanning so that you become somewhat of a fuller platform. If you're purely on the left side, doing CI/CD-based scanning and those kinds of things, the market wants you to add some of the context from the right side.

Because, honestly, that is the state of the market, really. Despite our beginnings purely on the right side as a packet filtering engine, we expanded the technology to be a fuller platform. But while doing so, there's always the decision as a founder, as a startup, that you make: when is this going to be enough? When do you know you've built enough features? When do you know that you're solving enough use cases? Otherwise, you could just go on adding features.

The adjacency is very lucrative. I mean, there's always something in the neighborhood that you can add, really. What we really decided to do is focus on the runtime. That's what we're going to monetize. If you're moving left for vulnerability scanning, for security scanning, or indicators of compromise scanning and things like that, you're moving just left enough to fill the gap, the glaring hole in the DevSecOps process.

So that on the left side, when people are scanning, they have context from the right. And on the right, when you're running your workloads, you have context of all the scans you did on the left side, essentially. One very concrete example here is, especially in case of vulnerabilities, the more you scan, the more vulnerabilities you end up with. It's a Sisyphean problem. You never converge.

CRAIG BOX: If only we stopped looking, there'd be none.

SANDEEP LAHANE: Precisely, really, right! [LAUGHING] So why are we even scanning if it doesn't converge? We decided that, OK, on the left, you scan. At runtime, you actually use eBPF and some of the underlying technologies to tell you that, hey, that log4j.jar file, there is no process at runtime that is loading that JAR file. If it is not loaded, that means it's not exploitable.

Now, we have certainly bridged the left and the right. We have started with a vulnerability and then we have said, it doesn't matter, because it's not loaded, there is no traffic reaching that particular place. So it's just not even going to be exploitable. Suddenly, you've bridged the gap. You've given the tooling on the left the context of the right and vice versa.
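As a concrete, if crude, illustration of the "is that JAR actually loaded by anything?" check: the Go sketch below walks /proc/<pid>/fd on Linux and flags processes that currently hold a log4j JAR open. ThreatMapper's own runtime correlation is eBPF-based and far more thorough; this is only a rough approximation of the idea.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// List every process directory under /proc (Linux only).
	procs, _ := filepath.Glob("/proc/[0-9]*")
	for _, p := range procs {
		fds, err := os.ReadDir(filepath.Join(p, "fd"))
		if err != nil {
			continue // process exited, or we lack permission
		}
		for _, fd := range fds {
			target, err := os.Readlink(filepath.Join(p, "fd", fd.Name()))
			if err != nil {
				continue
			}
			// Flag processes that currently hold a log4j jar open.
			if strings.Contains(target, "log4j") && strings.HasSuffix(target, ".jar") {
				fmt.Printf("pid %s has %s open\n", filepath.Base(p), target)
			}
		}
	}
}
```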

We decided that, look, we are adding more features on the left, but we want to open them up. We want to make this a community project. We will keep monetizing the right side, which is the packet filtering and all of that. But the artifacts of that, the deep observability that you build there, that has to be for the community. And that's what we are really doing.

CRAIG BOX: We talk here about the idea of the left side being the development and the right side being the deployment and so on. And I'm sorry to any of our listeners who read a right-to-left language. I'm sure this is all very confusing for you. But do people actually want one solution for this, these vendors who are building this out? Or is it the case that you can sell a best-of-breed product in one of those halves and interoperate with the other players in the other space?

OWEN GARRETT: I'd say that vendors would like to sell you a big product. They use words like holistic to try and suggest that all their little products work together as a big whole. They want to suck you into that, to get you into their platform and into their buying process. The reality is that different people have responsibility at different parts of the lifecycle of an application.

Developers, the DevOps team, and then it goes into production. There's a SecOps team, an AppSec team. And those people with different responsibilities use different tools, naturally. They've got different problems to solve, they've got different ways of working, they have different responsibilities. Just as NGINX follows the ideal Unix model of being a small solution to a very specific problem and doing it very, very well, that's what most smart open-source users want.

They're quite happy to pull together the right small solutions that are perfectly suited for each of the users and their problems. So let me give an example. There is vulnerability scanning; it's a commodity capability now. You can get vulnerability scanning in a dozen different forms, from tools that sit and go through your repositories, to tools that sit in your CI pipeline, to tools that run in production.

Underneath, it's all essentially the same technology. But you can't use the same vulnerability scanner everywhere.

CRAIG BOX: Why not?

OWEN GARRETT: You can't use it everywhere because it's used for different purposes by different people. So in my CI pipeline, I want something. And what's most important is it runs inside Jenkins, it talks to my ticketing system, it is triggered when a build happens and it blocks or allows the build. You can't use that when you've got a container running in production and you want to know if there is a vulnerability in that container.

The people doing that aren't even developers. They're your OpSec team.

CRAIG BOX: Is that a case of the signatures being different in that we're looking for signatures of source code versus compiled code?

OWEN GARRETT: The signatures are essentially the same. The problem you're trying to solve is the same. But the user works in a different way. And they have different goals. They're not looking for a product that will filter a build and stop a build if it doesn't meet certain criteria, they're looking for a product that will tell them, what's running in production? I don't even know what is there.

First of all, find for me what is on my production system. There's probably 15 different development teams that have pushed things into production, all using slightly different standards and approaches. I don't have confidence, to start off with, in any of them. So help me discover what's in production and build confidence that it is secure, that it meets our standards.

The first thing that Deepfence did, the first a-ha moment for me, was the realization that security and vulnerability scanning doesn't finish once you push something into production. Everyone's talking about shift left. They're applying this technology when you build stuff, but they're forgetting that once things go into production, they can change. Even the act of putting it into production can inject a sidecar into a container. It wasn't there during dev.

Think of patching in production. There's stuff in production that never went through your CI process. You don't know if it's secure. The Deepfence technology, the first problem it solves, is it lets you audit what's in production and tells you, are there any problems right now in my production environment? Is there a log4j module that slipped through because some developer made an exception? Or maybe it's in a third-party product that I got from another software vendor; I didn't build it myself.

They all look the same to Deepfence, to ThreatMapper. So with ThreatMapper, we take that vulnerability scanning capability that was originally targeted for development and we play that against production. And we tell you here are the vulnerable components that you've deployed into production, that your developers either let through as exceptions or didn't track, or, most importantly, they weren't known to be vulnerable when they went into production.

It was only after you deployed it that someone disclosed a vulnerability in Apache Struts or log4j.
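To illustrate the kind of check Owen is describing, here is a toy Go sketch that compares what an SBOM of a running container might report against a tiny, made-up feed of known-bad versions. The vulnerable versions listed are real examples, but the matching logic is deliberately simplistic; real scanners handle version ranges, distro backports, and much more.

```go
package main

import "fmt"

// knownBad is a toy feed: package -> versions with disclosed vulnerabilities.
var knownBad = map[string][]string{
	"org.apache.logging.log4j:log4j-core": {"2.14.1", "2.15.0"},
	"org.apache.struts:struts2-core":      {"2.3.31"},
}

// deployed is what an SBOM of a running container might report.
var deployed = map[string]string{
	"org.apache.logging.log4j:log4j-core":         "2.14.1",
	"com.fasterxml.jackson.core:jackson-databind": "2.13.2",
}

func main() {
	for pkg, version := range deployed {
		for _, bad := range knownBad[pkg] {
			if version == bad {
				fmt.Printf("vulnerable component in production: %s %s\n", pkg, version)
			}
		}
	}
}
```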

CRAIG BOX: When you think about the way we used to look at things in antivirus software, there was something that looked at files on the disk, and there was also perhaps something with a higher utilization requirement that looked at processes as they were running. Which of those does ThreatMapper do?

OWEN GARRETT: That's a really good analogy. ThreatMapper does the first. It looks, if you like, at the current state of what you've deployed and it tells you where are the weak points, it reveals any blind spots that you've got. The other smart thing it does is it then ranks those weak points based on how it thinks they could be exploited.

SANDEEP LAHANE: At runtime.

OWEN GARRETT: Yeah. If we see a vulnerability that is network exploitable and it's running on a workload that is receiving active traffic from the internet, that's a high priority. Compared to a vulnerability that requires local access and is deep inside your application. By ranking that, we take the thousands of vulnerabilities that you'll find and we tell you which pose the greatest risk.
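As a toy illustration of that ranking idea, the Go sketch below scores findings by weighting base severity with runtime exposure. The Finding fields and the weights are invented for the example; they are not ThreatMapper's actual schema or scoring.

```go
package main

import (
	"fmt"
	"sort"
)

// Finding is a hypothetical record; the fields are illustrative only.
type Finding struct {
	CVE             string
	CVSS            float64
	NetworkVector   bool // exploitable over the network?
	ReceivesTraffic bool // is the workload receiving traffic from the internet?
}

// rank weights base severity by runtime exposure, so an internet-facing,
// network-exploitable flaw rises above a local-only one.
func rank(f Finding) float64 {
	score := f.CVSS
	if f.NetworkVector {
		score *= 1.5
	}
	if f.ReceivesTraffic {
		score *= 2.0
	}
	return score
}

func main() {
	findings := []Finding{
		{CVE: "CVE-2021-44228", CVSS: 10.0, NetworkVector: true, ReceivesTraffic: true},
		{CVE: "CVE-2022-0778", CVSS: 7.5, NetworkVector: true, ReceivesTraffic: false},
		{CVE: "CVE-2021-3999", CVSS: 7.8, NetworkVector: false, ReceivesTraffic: false},
	}
	sort.Slice(findings, func(i, j int) bool { return rank(findings[i]) > rank(findings[j]) })
	for _, f := range findings {
		fmt.Printf("%-16s priority %.1f\n", f.CVE, rank(f))
	}
}
```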

CRAIG BOX: How often does that run? Are you constantly looking at the state of things as they are changing or is this something that you trigger with a cron job as often as you want?

OWEN GARRETT: Generally, you trigger it with a cron job. You'd hit us with the API. You'd run it in the background. We would just continually check to see — are there vulnerabilities? We pull new vulnerability feeds every day. So if something is disclosed, we'll then rerun it against your infrastructure and say, hey, you didn't know it, but now there's a vulnerability being disclosed in a particular software module that you're using.

SANDEEP LAHANE: The vulnerability scans, like Owen mentioned, are on demand. You can schedule them. But the traffic tracing uses eBPF — and when you don't have eBPF, for example on AWS Fargate, where we don't have eBPF as yet, we fall back to things like connection tracking, which Linux has had for years. This low-level tracing and tracking of who's talking to who, the chatter, the processes coming and going, this is continuous.

It's deployed as a daemon set. It's running with your microservices. This tracing is happening in a continuous manner. Vulnerability scans are happening on demand, or as a cron job. And then you overlay these two things to say, you know what? That particular process which has log4j loaded in memory sitting behind two proxies — not directly exposed to the internet, but indirectly it is because I can see the traffic flowing through — that is exploitable. Now, let's go and do something about it. That's what ThreatMapper does.
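Both answers describe kicking scans off on a schedule through the API. A minimal sketch of that pattern in Go, assuming a placeholder endpoint URL (the real ThreatMapper API paths and authentication are documented in the project), run from cron or a Kubernetes CronJob, might look like this:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

// Hypothetical endpoint; the real ThreatMapper API paths and auth differ.
const scanURL = "https://threatmapper.example.internal/api/start-vulnerability-scan"

func main() {
	// Intended to be invoked from cron (or a Kubernetes CronJob) so that
	// re-scans pick up vulnerability feeds that are refreshed daily.
	resp, err := http.Post(scanURL, "application/json", nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to trigger scan:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("scan triggered, status:", resp.Status)
}
```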

CRAIG BOX: I'm intrigued at the distinction between using eBPF to solve all of the problems by compiling down an application and effectively running it inside the VM inside the kernel, versus accelerating the passing off of the traffic to an application in user space. This comes up a lot in discussions about service mesh as to whether or not you could have Envoy running as one instance or whether you still need to have sidecars for each application.

How much were you able to actually put in the kernel with eBPF, and do you see a world where you can do all of your scanning inside an eBPF system?

SANDEEP LAHANE: Not a lot. We really use eBPF as a way to intercept things; all of the complex things, be it TCP reassembly or matching rules on top of that, happen in user space. There's a limit on how much logic we can push into eBPF. Of course, in the latest versions of eBPF, there is rule chaining, and limits like the size of the program that we can use have increased. But there is still a limit on how much you can push into eBPF.

Really, we look at eBPF as a way to intercept things very efficiently at the lowermost layers of the stack, grab the NF context, and then really work off of that.

CRAIG BOX: People normally talk about the height of a fence. Why does it matter to you how deep the fence is?

SANDEEP LAHANE: [LAUGHING] Owen and I were just thinking about whether there's anything else that could matter. It's about the depth of the fence. That's what we believe here at Deepfence, honestly.

CRAIG BOX: Well, of course. The clue is in the name.

SANDEEP LAHANE: Yeah. [LAUGHING]

OWEN GARRETT: It's about looking deep into your application, not just looking on the surface, what processes are running. Looking deep right down to the level of seeing individual network events, pulling out requests saying this is recon traffic, this is weaponization. Oh! This weaponization corresponds to a vulnerability in my application. With that deep level of insight, then in ThreatStryker, we're able to assemble all of those little traces and signals and tell you the story of what's happening in your application.

We know the map of the application. We get that from ThreatMapper. We know where the weak points are. We know who talks to what. And then we know what's happening at runtime. We get the network traffic. We get the events on the host. A process crashes or a file system modification is made or a new SSH connection happens.

We piece that together with the network data and what we know about the map to give you a really deep picture about the anomalies in your applications and how those correspond to a combination of attack traffic and behavior – the weak points that are currently present in the app. It's only with that deep insight that you can then properly observe what's happening and then secure it.

CRAIG BOX: Once I have that insight, whether I gained it through the open-source ThreatMapper tool or through your ThreatStryker product, what do I do with it? How do I decide which things should wake me up and how much can be fixed by a machine versus being fixed by a human?

OWEN GARRETT: We use a couple of well-regarded industry standard models to help shape this information. There's a model called the MITRE ATT&CK matrix, which is a way of categorizing individual attack signals, from recon through to port knocking through to process signals and the like. That's the base model that we use to understand what we're seeing.
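As a toy illustration of that categorization step, the sketch below maps a few low-level signal names (invented for the example) onto real MITRE ATT&CK tactic names. Any production mapping is, of course, far richer than this.

```go
package main

import "fmt"

// tacticFor is a toy mapping from invented signal names to real MITRE
// ATT&CK tactic names; a real model covers hundreds of techniques.
var tacticFor = map[string]string{
	"port_scan":          "Reconnaissance",
	"exploit_http_param": "Initial Access",
	"reverse_shell":      "Command and Control",
	"ssh_to_neighbor":    "Lateral Movement",
}

func main() {
	observed := []string{"port_scan", "exploit_http_param", "reverse_shell", "ssh_to_neighbor"}
	for _, signal := range observed {
		fmt.Printf("%-20s -> %s\n", signal, tacticFor[signal])
	}
}
```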

And then there are higher level models. There's one called the Cyber Kill Chain, which is like a movie script.

CRAIG BOX: It does sound like it.

OWEN GARRETT: It is. It tells you the different scenes in the movie and how they all fit together, how the heist against the bank unfolds from the initial probes that the criminals make through to the different actions they take to spread laterally. That's what puts the story onto the events in the ATT&CK matrix.

CRAIG BOX: This movie sounds like it needs Nicolas Cage in it.

SANDEEP LAHANE: [LAUGHING]

OWEN GARRETT: Exactly. A Nicolas Cage or a Brad Pitt or a George Clooney style movie. Good heist movie.

CRAIG BOX: I don't want to say that two of those actors are perhaps in a different category than the third one. I'm going to let you figure out which one I'm referring to.

[LAUGHTER]

OWEN GARRETT: I'm not judging that. And if you were to put Deepfence in that movie, then what we are, we're the detective sitting in the mayor's office who sees what's going on and he or she correlates the signals and realizes what's happening inside the city before it's obvious to anybody else. Sees all the little signals, understands that bad actors are mounting an attack, and then you ask, what do we do at that point?

So you can either let us, ThreatStryker, deal with it. Once we see a sequence of events pass a certain threshold of risk, we can deploy a firewall. We talk to the local CNI to stop attack traffic from a particular source. Or if we think that a container or a pod has been tainted, we can stop or kill that pod.

We can do that automatically. And then you rely on the scheduler to restart it. Or we can stick our hand up and we can shout out to pretty much an integration of your choice and say, hey, there's something weird happening here. We think you need to look at it. Out of the 2 million signals we saw last week, these are the two signals we think that matter.
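As a rough sketch of the "kill the tainted pod and let the scheduler replace it" response, not ThreatStryker's implementation, here is what that action can look like with the Kubernetes client-go library, assuming in-cluster credentials and a made-up namespace and pod name:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// quarantineByDeletion deletes a pod judged to be compromised; if the pod is
// managed by a Deployment or ReplicaSet, the controller schedules a clean
// replacement. Namespace and pod name here are made up for illustration.
func quarantineByDeletion(namespace, pod string) error {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	return client.CoreV1().Pods(namespace).Delete(context.TODO(), pod, metav1.DeleteOptions{})
}

func main() {
	if err := quarantineByDeletion("payments", "api-7d9f8c6b5-x2k4j"); err != nil {
		fmt.Println("failed to delete pod:", err)
	}
}
```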

CRAIG BOX: Is this the kind of thing that a SecOps team will be staffed to understand in a larger organization?

OWEN GARRETT: I would certainly hope so. Most of the SecOps teams we talk to use the language of the MITRE ATT&CK matrix. They understand the Cyber Kill Chain and other ways to tell the story. They spend a lot of time manually going through, correlating logs, trying to sift the signal from the noise.

And what we do with ThreatStryker, we automate a huge amount of that. We allow the SecOps team to focus on the higher value stuff. We do the drudge work, we filter out a lot of the noise to give you the signal. Every security posture is different. We would never assume that we would know universally the right thing to do at every point.

So typically, we'll tell you something weird has happened. You can then look back through our logs, the rolling records we keep. And we'll tell you all the signals that happened against that workload, going back in time up to two or three months. You can then piece it together and figure out what's going on and decide, do you want us to continue just monitoring? Do you want to step in and do something?

Or do you want to turn on our automated protection because you think we've got a pretty good handle on this particular issue and you're happy for us to handle it from now on?

CRAIG BOX: Some people will release open-source software as a loss leader, in effect, for their commercial application, hoping to get people to build up their support. Some people will open source a part of their commercial application because they know that it's security software and it needs to be audited and accepted by the kind of teams that are using it, where everything has to be open source.

Yet other people say that everything has to be open source because the Kubernetes ecosystem is effectively open source. Where is the distinction between ThreatMapper and your commercial products on that spectrum?

OWEN GARRETT: The distinction's really, really clear. If it's about finding vulnerabilities, understanding what you've deployed, extending your shift left into production, that's ThreatMapper. That's the right way it should be. ThreatMapper is built on public feeds, on open-source data, and uses additional open-source tooling. If it's about security, then ThreatMapper is there. There are no limits on what you can do with that. There are no limits on what you can scan, where you can deploy it.

It's completely open source to the extent that I honestly couldn't even tell you who's using it, because we don't have a phone home within the product. We deliberately don't have a way of tracking users. If, then, you're satisfied with what ThreatMapper does and the map that it shows you and you want to then understand what are people doing on that map, that's where ThreatStryker comes in. Really clear separation.

If it's building the map, that's open source. If it's tracking where people are moving about on the map — so that's very tailored to your app and your use case — then that's commercial.

CRAIG BOX: You're both the head of product and community, Owen. How do you see the building of a community around that mapping product when it is such an inherent part of the commercial offering?

OWEN GARRETT: The commercial offering depends upon it. But it stands entirely in its own right as an open-source project. And in fact, one of the things that we're considering doing is splitting the two apart. What I'd really like to do is to turn the open-source product into a true platform. It finds information, builds the map, makes that available through a series of APIs and logs, and then we can build an ecosystem on top of that platform.

And it would be an open ecosystem. It would allow us to refactor our observability and runtime tools to be clients to that platform. But equally, it would allow end users to build integrations with their own SIEM tools and logging tools, or other vendors or open-source providers to build tools that run on top of our platform.

What I'd really like to see is for ThreatMapper to get to that point, where it's an open platform that meets those product goals and is something that the community can gather around and feel that they own.

CRAIG BOX: And Sandeep, are you looking then for technical contribution to the product? Are you hoping to get code contributions from your users, for example?

SANDEEP LAHANE: Absolutely. And in fact, we were pleasantly surprised. It's just been, I think, 96 days as of today, Owen, since we launched ThreatMapper? It is running on Raspberry Pi. People have come in with large patches, essentially, to fix things. And what does success look like for a project like this? Because you rarely see these kinds of efforts in cybersecurity; you see them on the left side, in dev tools, for sure. One of the signals that we were looking for was, of course, the stars and adoption. But we were pleasantly surprised that people are putting this on Raspberry Pi, using it all across. So, absolutely. Looking for technical contributions, feedback, ideas, and all of that, really.

CRAIG BOX: What is it that you would like to see people do to get involved in the ThreatMapper community?

OWEN GARRETT: I'd say to the community, please, first of all, try it out. If it works for you, if it solves a problem, if it reveals something that you didn't already know, then tell people about it. We'd love to work with you if you want to write, you want to speak, you just want to tweet about it. We've got plenty of resources to help people who would like to help us make this open-source project a success.

CRAIG BOX: Alright, thank you very much both for joining us today.

SANDEEP LAHANE: Thank you, Craig.

OWEN GARRETT: Thank you. It's been great chatting with you.

CRAIG BOX: You can find Sandeep on Twitter @Deepfence. You can find Owen on Twitter @OwenGarrett. And you can find links to Deepfence, ThreatMapper, and ThreatStryker in the show notes.

[MUSIC PLAYING]

CRAIG BOX: That's about all the time we have this week. If you've enjoyed the show — and who hasn't — please help us spread the word and tell a friend. If you have any feedback, please send it to us on Twitter @KubernetesPod or reach us by email kubernetespodcast@google.com. You will also find our website at kubernetespodcast.com, and on it, you will find transcripts and show notes. Thanks for listening, and we'll see you next week.

[MUSIC PLAYING]