#224 April 30, 2024

OpenFeature, with Thomas Poignant and Todd Baert

Hosts: Abdel Sghiouar, Kaslin Fields

Thomas Poignant and Todd Baert are software engineers with long experience working on IAM systems and feature-flagging software. Today they are both maintainers and members of the Technical Committee of OpenFeature, which is a CNCF-incubated project.

Do you have something cool to share? Some questions? Let us know:

News of the week

Thomas Poignant (LinkedIn, Twitter/X)

Todd Baert (LinkedIn, Twitter/X)

ABDEL SGHIOUAR: Hi, and welcome to the "Kubernetes Podcast" from Google. I'm your host, Abdel Sghiouar.

KASLIN FIELDS: And I'm Kaslin Fields.


ABDEL SGHIOUAR: This week, we speak to Thomas Poignant and Todd Baert, who are software engineers with long experience working on IAM and feature-flagging systems. Thomas and Todd are maintainers and members of the Technical Committee of OpenFeature, which is a CNCF-incubated project. We talked about feature flagging and the OpenFeature project.

KASLIN FIELDS: But first, let's get to the news. Microsoft announced the general availability of the Istio service mesh add-on on Azure Kubernetes Service. With this announcement, users of AKS can deploy upstream Istio versions 1.19 and 1.20 into a single cluster. They can also use Azure's upgrade mechanisms to upgrade the Istio control plane and can visualize Istio core metrics directly in the graphical console.

ABDEL SGHIOUAR: The CNCF released their 2023 annual survey. The annual questionnaire surveyed around 3,700 candidates, of which around 1,000 were retained for analysis. Some of the highlights are the growth and adoption of graduated projects like Kubernetes, Prometheus, gRPC, and Helm, with 84% of respondents reporting using or evaluating Kubernetes. You can read the findings of the survey and the research methodology in the link in the show notes.

KASLIN FIELDS: Women Who Code announced it's closing its doors as of April 18, 2024. The organization started in 2011 in California and moved its headquarters to Atlanta, Georgia, in 2018. Its mission was to empower women in tech via various programs, events, and scholarships. In its announcement, Women Who Code encouraged the community to continue the work of empowering women in technical spaces. The organization cited funding as a main reason for ceasing operations.

ABDEL SGHIOUAR: Microsoft revealed vulnerabilities in OpenMetadata versions prior to 1.3.1. If exploited, these vulnerabilities allow attackers to gain control of Kubernetes clusters and run crypto-mining operations on them. OpenMetadata is an open-source platform used to host and distribute metadata across various data sources, and it acts as a central repository for metadata. Users and platform admins should check and make sure they are running OpenMetadata version 1.3.1 or later.

KASLIN FIELDS: And that's the news.

ABDEL SGHIOUAR: Well, hello, everyone, and welcome to a new episode of the "Kubernetes Podcast." Today, I'm talking to Thomas and Todd. Thomas and Todd are software engineers with long experience working with IAM systems and feature-flagging software. Today, you are both maintainers and members of the Technical Committee of the OpenFeature project, which is a CNCF-incubated project. Welcome to the show, Thomas and Todd.


TODD BAERT: Glad to be here.

ABDEL SGHIOUAR: Good to see you. So, well, let's start with introducing yourself. Maybe we can start with Thomas.

THOMAS POIGNANT: Yeah, sure. So I'm Thomas. I'm living in Paris, France, and I'm a senior software engineering manager at a company called Adevinta. And one of the things I do as a side project is work on feature flags.

I started with a personal project called GO Feature Flag, which is, basically, an open-source feature-flag solution. And later on, I met Todd, actually. Todd told me about OpenFeature, and I started to be more involved in it and to work toward a more standardized way of doing feature flagging. So that's why I'm here now.

ABDEL SGHIOUAR: Nice. And I just realized today that KubeCon, the KubeCrawl part of KubeCon, is actually sponsored by your company.

THOMAS POIGNANT: Yeah, actually.

ABDEL SGHIOUAR: So we're going to be drinking beer paid for by your company. What about you, Todd? Can you introduce yourself?

TODD BAERT: Yeah, hi. My name is Todd. I work for a company called Dynatrace. In the past, I've really done a lot of work in IAM and authentication/authorization, but more recently, I've worked on feature flagging at Dynatrace, as well as other companies.

And I moved into the open-source program office at my company, where we focus on a number of open-source projects, one of which is OpenFeature. And like Thomas said, he was already in the feature-flagging space in an open-source way, with a project he started himself. So we connected pretty early on, and we've been working on OpenFeature together ever since.

ABDEL SGHIOUAR: Nice. So a little disclaimer-- I am not a programmer. I pretend to be one. But can one of you explain to us what feature flagging is? What do we mean when we say "feature flags"?

THOMAS POIGNANT: No, I can go. So yeah, feature flagging is, basically, a way to change how your application behaves without pushing any new code to production. So basically, it's a way to activate features or change the behavior of something that you have without doing any deployment or anything. It's just something you do in your flag-management system.

So you have a lot of them, actually, and you do some remote configuration, enabling/disabling stuff, and all of this directly from a third-party thing, not directly in your application. So it allows you to activate and deactivate features. Most of the time, this is the basic use of it. I don't know if, Todd, you want to add something.

TODD BAERT: I think you got the core of it. I think Pete, who's on our governance committee-- Pete Hodgson-- has actually blogged a lot about feature flags. I think he has a popular article on Martin Fowler's site. And he said it really eloquently.

He said feature flags are basically pivot points in your code, where you can use some external system to take one code branch or one branch of execution or the other to control the user experience, operational toggles, all kinds of different things, even perform experiments. So there's a lot of different ways to use them. But at the end of the day, it's kind of like a pivot point in your code that you can control and modify without actually changing the code itself. So you make some change to some external system, and then a different part of the code executes.
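The "pivot point" Todd describes can be sketched in a few lines of Python. The names and the in-memory store here are purely illustrative, not the OpenFeature API:

```python
# A feature flag as a "pivot point": the code asks an external source
# which branch to take. FLAG_STORE stands in for a real flag system.
FLAG_STORE = {"new-checkout": True}

def flag_enabled(key: str, default: bool = False) -> bool:
    return FLAG_STORE.get(key, default)

def checkout() -> str:
    if flag_enabled("new-checkout"):
        return "new checkout flow"
    return "legacy checkout flow"

# Flipping the value in the external store changes which branch runs,
# with no code change or redeploy.
```

The application code itself never changes; only the value in the external store does.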

ABDEL SGHIOUAR: And so, I mean, as somebody who has done system administration for a big part of his career, I would assume that "flag" comes from a flag in the CLI. It's kind of the same terminology, because typically, when you run a command line, a flag will alter the behavior of the command.

TODD BAERT: Yeah, I mean, there's definitely a similarity. I would say that a flag on a command line is more of an immutable thing, because you run the command, and then something executes. The execution of that program may be long-lived, or it may not be. Whereas when we think about feature flags, we're thinking much more dynamically.

So some flags don't change over the course of an application's life cycle. The most basic conception of a flag is probably just a flag that's on or off, and it doesn't change throughout the application's life cycle. The app either launches with the flag in the on state or in the off state.

But one of the ways in which feature flagging becomes really interesting and compelling is flags that are changeable at runtime. So instead of just a startup, you can change them in flight. So after an application is deployed, and in production, and safely operating, maybe you've made some changes that you're not sure about, then you can actually turn something on. And if anything goes wrong, you can revert it without any kind of deployment. So I'd say the dynamic part is a little bit different from a CLI flag.

THOMAS POIGNANT: Got it. So on top of that, another interesting part of feature flagging is that you can target-- we can have dynamic features, meaning that you can say, I want only my beta users to have the flag on, or only my internal testers, or whatever. So it's really something where, for the same app running, you can have different behavior based on the context of your user. So it's also a really nice addition compared to flags on the command line.

ABDEL SGHIOUAR: So then in that case, how is this different from using headers, for example, if it's an HTTP application because people typically use headers to alter the behavior of an application based on beta users, internal testers, not internal testers, et cetera?

TODD BAERT: Well, headers are going to be one data source that you could act on. So to reiterate what Thomas is saying, I guess there's two kinds of dynamism when we're talking about feature flags. We're talking about the flags-- the configuration can be changed dynamically. But also, the targeting of the feature flag could be dynamic, and that's going to be based on contextual data.

So let's take your example of a header. Say there's a header for user agent. That tells you, in a lot of cases, what the browser is. So maybe it's Firefox or Chrome.

You could write a feature flag that would say, show this feature to all Chrome users, and then it would use that header to show it to all Chrome users. And then, on the fly, you can say, OK, actually, show it to Safari users and Chrome users. And then Safari and Chrome users would see it. So we're dynamically targeting, and we're also dynamically changing the targeting without deploying an application.
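The header-based targeting Todd walks through can be sketched like this. The rule format and names are made up for illustration; real systems use richer rule languages:

```python
# Dynamic targeting: a rule stored outside the app decides who sees a
# feature, based on request context such as the parsed User-Agent.
targeting = {"show-feature": {"browsers": ["Chrome"]}}

def evaluate(flag: str, context: dict) -> bool:
    rule = targeting.get(flag, {})
    return context.get("browser") in rule.get("browsers", [])

# Chrome users see the feature; Safari users don't...
chrome_sees_it = evaluate("show-feature", {"browser": "Chrome"})

# ...until the rule is widened on the fly, with no deployment:
targeting["show-feature"]["browsers"].append("Safari")
safari_sees_it = evaluate("show-feature", {"browser": "Safari"})
```

Both the rule and the flag value live outside the application, so either can change at runtime.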

ABDEL SGHIOUAR: Without deploying the application, OK. So can you walk us a little bit through how people used to do this before? I mean, feature flagging is not a very old technique, I assume, because I hadn't heard about it until a few years ago. So how did people do it before? What were people doing before feature flags were a thing?

THOMAS POIGNANT: When you look at it, feature flagging is something that is pretty simple. It's basically a configuration that is remotely available and that you can change on the fly. So people were kind of doing it without doing it, meaning that you already had configuration files or things like that that applications listened to. But it wasn't really formalized-- it was really homemade.

What changed recently is that now it's become more and more professional, and we have a lot more companies doing it. And you have way more advanced ways of doing it. What you want is to know that when you change your configuration, all your systems react to it at the same time. And this is exactly what feature flagging offers you now. Let's say you can use the same flag to enable a feature in your back end and in your front end at the exact same time. So when a user hits the page the next time, the back end will have changed, but also the front end.

And this is exactly the kind of thing that feature flagging is bringing now. It's just becoming more and more advanced in terms of features, so we can turn things on and off, but also test in production, this kind of thing. You can say, I apply this flag to only one person. So you can test your new release in production before releasing it to everyone.

And also, what it brings compared to just the configuration side is that you can have advanced rollout strategies, saying, like, I want my flag to roll out gradually for all these users in this time frame, for example, these kinds of things. That was not the case in the past. So I'm not saying it's a new concept. It's just that it's become more and more a part of the journey of any developer, because it's super handy. When you start using it, it's hard to go back to the normal way of: I do something, I deploy it, and that's it.
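A common way to implement the gradual rollout Thomas mentions is stable percentage bucketing. This sketch shows one generic approach, not how any particular vendor does it:

```python
import hashlib

# Hash the flag name plus user ID into a stable bucket 0-99, then
# compare against a rollout percentage that can be raised over time.
def bucket(flag: str, user_id: str) -> int:
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def in_rollout(flag: str, user_id: str, percentage: int) -> bool:
    return bucket(flag, user_id) < percentage

# Because the bucket is deterministic, raising the percentage from 10 to
# 50 only ever adds users to the rollout; nobody flip-flops between
# variants from one request to the next.
```

Hashing on the flag name as well as the user ID means different flags roll out to different subsets of users, rather than always the same "first 10%."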


THOMAS POIGNANT: I think that's the main reason why-- a [? little ?] bit more.


TODD BAERT: Yeah, I totally agree with what Thomas is saying. I remember hearing "feature flag" for the first time probably around 9 or 10 years ago. But the conception of a feature flag then was just, basically, controlling the rollout of a new feature in a very binary way, not dynamic. It was controlled by an environment variable. And it was a developer-focused thing entirely. And there was no maturity around it and no practice around it.

So I think what we're trying to do-- what's happening with the industry in general is these things are being kind of standardized, if not officially in all cases, just by means of the fact that people are kind of trying to standardize the platforms they're working on in terms of Kubernetes and stuff like that. So being able to take these practices and making them look unified and applying best practices universally so that developers understand these best practices and can easily leverage them, I think that's what we're seeing happening. And feature flags moving from a very basic kind of stopgap to an actual rigorous practice is just another example of that.

ABDEL SGHIOUAR: Got it. And so you kind of touched on the next question I was going to ask, and this one is specifically for you, Todd, because I saw on your LinkedIn that you said you are a polyglot developer. So before OpenFeature was a thing, I assume it was probably a pain to manage feature flagging across multiple programming languages, right? So what made it tricky? What I'm trying to get to is, what's the root of creating a standard like the OpenFeature project?

TODD BAERT: Yeah, well, it is tricky. So if you're trying to do a robust feature-flagging solution, kind of like we already touched on, you want a few things. You want the ability to modify them dynamically and have it impact all the relevant workloads at, basically, the same time. You want some kind of consistent and easy-to-use API for application developers.

And you probably want some kind of control plane, too, from an administrative perspective. And that would include, you know, getting metrics and telemetry about your feature flags so you can make sure they're actually working the way you want to. And especially if you're doing experimentation, you need to be able to see the results and the impact of that experimentation.

So when bigger companies tried to roll out mature solutions to this, if they were rolling it themselves-- which frequently they were, especially until pretty recently, when we've seen vendors get into the space-- for every application language they supported, they'd create some kind of SDK, and that SDK would probably talk to some flag-configuration back end somewhere, some REST service. And then they would have to roll that out and implement it several different times in several different languages. And so that's where this challenge comes from: just the difficulty of implementing that multiple times in multiple languages.

ABDEL SGHIOUAR: Got it. So then that brings us to what we are here to talk about, which is OpenFeature, the project that went to incubating stage as part of the CNCF. I saw it on the blog when we were preparing the news for one of our episodes, and I was like, oh, this looks interesting. And that's the reason why we are here today. So what is OpenFeature? Give us the sales pitch, let's say. I know that you folks are not sales, but what's the pitch?

THOMAS POIGNANT: I mean, I can try. Todd, correct me if I'm wrong, but the main goal of OpenFeature is to standardize the way you talk with a feature-flag solution. I should mention that if you were using a feature-flag solution before OpenFeature, you had a specific SDK per language and also a specific SDK per vendor, meaning that if you switch from one vendor to another, you have to reimplement all of your feature-flag system, because there's no common way of doing it. Every solution is super close, but each has a specific API.

What OpenFeature brings is a common SDK that is vendor-agnostic, meaning that when you integrate with OpenFeature, you don't have to care about which vendor you are using. And a bit like OpenTelemetry does with telemetry, you just plug in what we call a provider. It's an implementation for a specific vendor, like a link to their SDK.

So every vendor can have their own OpenFeature-compatible provider, and they just plug it in behind our SDK. So the main goal is to have a common way of doing feature flagging in every language possible-- well, every language we support right now-- but especially to be sure that we are not tied to one specific vendor or open-source solution, and that you always deal with feature flags the same way.
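The provider idea Thomas describes is essentially a small adapter pattern. This sketch mirrors the concept only; the class and method names are invented here, not the real OpenFeature SDK surface:

```python
# The application codes against one client API; a pluggable provider
# adapts it to a vendor SDK, a file, a database, or anything else.
class InMemoryProvider:
    """Stands in for any back end: vendor, homegrown service, file..."""
    def __init__(self, flags: dict):
        self.flags = flags

    def resolve(self, key: str, default, context: dict):
        return self.flags.get(key, default)

class Client:
    def __init__(self, provider):
        # Swap providers here; application code below never changes.
        self.provider = provider

    def boolean_value(self, key: str, default: bool, context=None) -> bool:
        return bool(self.provider.resolve(key, default, context or {}))

client = Client(InMemoryProvider({"dark-mode": True}))
```

Replacing `InMemoryProvider` with a vendor-backed provider changes nothing in the code that calls `client.boolean_value(...)`, which is the point of the standard.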

TODD BAERT: Yeah, I mean, I couldn't have put it better. That's been a huge goal of the project for, basically, the whole life of it. The other thing is beyond just vendors, we see a lot of larger orgs that want to implement their own solution.

And so OpenFeature becomes a good target for them in terms of-- we have an SDK. You can basically build an adapter layer to talk to whatever source of truth, whatever control plane you want to use for your feature flags. Maybe it is a vendor. Maybe it's some homegrown solution. Maybe it's literally just a file. We have a number of different solutions.

And you can start small. Maybe you don't have any feature flagging right now, but it's something you want to experiment with. You can start with just backing the OpenFeature SDK with environment variables, or a basic file, or a REST service that you write that controls it, maybe just connects to a database. And then, if you really decide that you see the value, you can migrate to a vendor or take the time to implement a real, robust homegrown solution if you feel like it. We also offer some off-the-shelf bits and pieces to help you write your own.

ABDEL SGHIOUAR: Got it. So I was looking at the five-minute rundown of how to use OpenFeature. And I'm going to tell you what I understood, and then you correct me if I'm wrong. So you have the SDK part, which is what you would implement in your application. And then in your application, you would declare the flags that the application supports, and for each flag, you would declare what the possible values are and what the default value is.

I assume that's because if the server is not available, then you want the application to still behave or just load. I'm getting a thumbs up-- good. And then you have what you call the provider. Is that the right terminology?


ABDEL SGHIOUAR: And so the provider is actually the server, right?

TODD BAERT: The provider is the adapter layer. So, I mean, you could think of the provider as the server in an abstract sense when you're coding. But really, the provider is the thing that backs our API, which is a uniform way of evaluating feature flags across languages. It uses consistent semantics and an idiomatic API in every language to communicate with your source of truth for your flags. So that adapter layer that makes sure our API can talk to wherever your flags are stored-- that's what we refer to as the provider.

ABDEL SGHIOUAR: And then, what implements the provider could be a vendor tool or an open-source tool, right? That's what would be called a server. But, as you said, it could be a database. It could be a file. It could be whatever.

TODD BAERT: Yeah, exactly. So we have vendors who produce and publish their own providers. A whole bunch of them are contributed in open source. So in the case of a vendor, a provider is basically going to wrap their SDK and allow the OpenFeature API to communicate with their back end.

But then we have providers for just environment variables if you're getting started. And then we have providers that correspond to our reference implementation. Thomas has produced a whole bunch of providers for his open-source solution, the one that predates OpenFeature, GO Feature Flag. So yeah, even people who've been using GO Feature Flag can use the OpenFeature API with that back end.

ABDEL SGHIOUAR: So OpenFeature is a server, as well, that can implement the provider?

THOMAS POIGNANT: No, so OpenFeature is really the SDK part and how to communicate with the flag-management system, and you have a bunch of flag-management systems that are compatible with it. Pre-OpenFeature, I was developing something called GO Feature Flag. That is one solution that is compatible with it. As part of the OpenFeature project, you also have flagd. That is an implementation that was built at the same time as OpenFeature, as a reference for flag evaluation using OpenFeature.

But you have a bunch of others, like Flagsmith, which is open source but also a SaaS solution that does it. And you also have big vendors, like LaunchDarkly and these kinds of companies, that are using it. And yeah, we have more and more people interested in the topic. We have folks at Spotify, and a lot of others-- I cannot give all the names because there are way too many. But yeah, we have a lot of people starting to contribute to OpenFeature, to make it even bigger and have more impact on what we are doing.

ABDEL SGHIOUAR: Yeah, I have a friend working at Spotify as a backend engineer, and he was mentioning using flags for driving experiments, but I'll have to go back to him and ask what they're using. So you talked about a bunch of interesting things. I'm going to put my system engineer hat on-- so in the context of what we were talking about, the SDK would typically be the client to a back end, and the back end could be flagd, your homegrown back end, whatever. So the first question that comes to mind: what if the back end is not available? How would the SDK behave?

TODD BAERT: One of the big goals of the project was-- again, Thomas mentioned it earlier, but when you survey the landscape of these kinds of APIs and SDKs that various vendors offer, they behave in a remarkably consistent way. And I think that's a benefit, because it really does create uniformity in what we mean when we're talking about feature flags.

So one of the things that you'll notice, if you start to look into these SDKs and these concepts from various vendors or even homegrown solutions, is that since feature flags are generally kind of an orthogonal concern, like a crosscutting concern in an application, you don't want your application not to work if your feature flags don't work. They're there for experimentation. They're there for rolling out new features, but they're not mission-critical, generally. There's probably some exceptions, but generally, they're not mission-critical.

So basically, the way that our API and most APIs work is you always supply a default value. And if something goes wrong in the course of evaluation, you're going to get that default value worst-case scenario. So in a general sense, as long as you're engineering your code so that if the default value is returned, nothing ever breaks and things generally work-- they may just work in a kind of a more basic state, maybe a new feature isn't seen, something like that-- but as long as that's a situation you're in, you really don't have too much to worry about.

Now, of course, there's ways to get around that. If you feel like a particular flag is mission-critical, there's lots of things you can do. And different vendors and different solutions are going to offer various SLAs, obviously, for availability.


TODD BAERT: So that's kind of a separate question. But, in general, the idea is your application can function in a basic way without feature flags, and our APIs all kind of reflect that as a concept.

ABDEL SGHIOUAR: Yeah, so in other terms, you should treat a feature flag-management system like any dependency on a third-party API. If that API is not available, your application should still behave a certain way.

TODD BAERT: Exactly.

ABDEL SGHIOUAR: So then I have a follow-up question. Does OpenFeature have an opinionated way about how the features are supposed to be supplied in terms of which API? Does it have to be REST? Does it have to be gRPC? Or it doesn't matter?

THOMAS POIGNANT: No, because providers are made for each solution, and so they decide how to contact the backend system and how they want to do it. So if they want to use REST, it's fine. If they want to use gRPC, it's fine too. And I think we already support both of these technologies in different flag-management systems. So this is really not a problem.

Yeah, I think the biggest thing is just coming up with the language-level abstractions, knowing that we need to support a wide variety of back ends when we design our API and our abstractions. So, for instance, with both the protocols you mentioned, gRPC and REST, calls are generally thought of as asynchronous. So in a lot of our language implementations, we have asynchronous flag-evaluation or flag-resolution interfaces so that, if you need to go call an HTTP endpoint, you can do that.

If you need to make a gRPC call, you can do that. Or you can read from a file. It's really up to you. All you have to do is implement this interface that's going to fulfill the right data contract in terms of evaluating a flag based on the context that's coming in, based on the identifier for the flag, and the default value, that kind of thing.
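The data contract described here, one resolution entry point taking the flag key, the default, and the evaluation context, can be sketched roughly like this. It illustrates the shape of such an interface, not the real one:

```python
import asyncio

class AsyncProvider:
    """How it resolves a flag (gRPC, REST, a file) is its own business;
    callers only see one async resolution method."""
    def __init__(self, flags: dict):
        self._flags = flags

    async def resolve_boolean(self, key: str, default: bool,
                              context: dict) -> bool:
        await asyncio.sleep(0)  # stands in for a network round trip
        return self._flags.get(key, default)

async def main() -> bool:
    provider = AsyncProvider({"beta": True})
    return await provider.resolve_boolean("beta", False, {"user": "abdel"})
```

Making the interface async means an implementation backed by an HTTP or gRPC call fits the same contract as one that reads a local file.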

ABDEL SGHIOUAR: Yeah, that answers my question, because one of the things I was thinking about-- again, looking at the quickstart guide example; I didn't really dig too much-- my first line of thinking was: if, for each execution of a function or a part of the code, I have to go check a remote API to see what the value or the behavior is right now, how does that impact things like latency, for example? Or is it not even a concern? I'm not talking here about OpenFeature specifically-- I'm thinking, generally speaking, about feature flagging with a remote server.

THOMAS POIGNANT: So what we see is that we have two types of flag-evaluation systems, depending on the vendor. Some of them do everything remotely plus some caching, but others do local evaluation, meaning that within the SDK, they just get all the flag configuration, and they are able to evaluate the flag value locally inside the SDK. And they just use the remote part to keep in sync with any change of configuration. That's what you see in most of the advanced ones, because it means that you don't have latency in the network part. You just have the latency of the evaluation itself, which is, in most cases, super low, because you are just doing a local evaluation.
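The local-evaluation model Thomas describes can be sketched as a cache that refreshes from the remote side while every evaluation stays in-process. The TTL-based refresh policy here is a made-up simplification; real SDKs often use streaming or polling:

```python
import time

class LocalEvaluator:
    def __init__(self, fetch_config, ttl_seconds: float = 30.0):
        self._fetch = fetch_config          # e.g. an HTTP call to the back end
        self._ttl = ttl_seconds
        self._config = self._fetch()        # pull the whole flag configuration
        self._fetched_at = time.monotonic()

    def _maybe_refresh(self) -> None:
        # The remote side is only used to keep the local copy in sync.
        if time.monotonic() - self._fetched_at >= self._ttl:
            self._config = self._fetch()
            self._fetched_at = time.monotonic()

    def boolean_value(self, key: str, default: bool = False) -> bool:
        self._maybe_refresh()
        # No network hop per evaluation; just a local lookup.
        return self._config.get(key, default)
```

The per-evaluation cost is a dictionary lookup, which is why Thomas says the latency of this model is "super low."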

ABDEL SGHIOUAR: Yeah, you're just evaluating a local variable or a local value, essentially. And so-- probably I've been asking a lot of dumb questions, so excuse me for that-- flagd. I was looking at the example of flagd. And the example used Node.js: OK, Node.js, show "Hello, world," and then now show "Hello, world" with the cowsay library, like as a cow. And essentially, in the example, flagd acts as a remote endpoint that the SDK would go talk to, to evaluate the value-- true or false, whatever.

I think the question was, if I am operating the flag-management system, how does that typically work? Is it all in memory? Is it a back end with a database? Are there people doing things like GitOps, where changing a flag is a Git operation instead of just somebody changing a value on the fly? I know it's very generic-- an open question, a vague question.

TODD BAERT: No, it's a fantastic question. So in the case of flagd, flagd-- as was mentioned before-- is our reference implementation for a back end. And it's very cloud-native. And that's probably a fuzzy term to a lot of people. What that means to me is it's, in a lot of ways, built as a kind of distributed component, to be used in a distributed architecture. And it observes Unix principles-- really easy to containerize, that kind of thing. So the idea with flagd is that you can source flags from a number of different places.

So right off the bat, probably the most common is from a file. So you can have a file local to flagd-- not necessarily local to your workload, but local to flagd. That could be a ConfigMap that Kubernetes mounts as a file, or an actual file, whatever. And then flagd is going to use that definition to serve feature flags over gRPC.

But you can also point flagd at HTTP resources, and you can point it at Kubernetes custom resources, for example. So the OpenFeature operator works hand-in-hand with flagd to, basically, store feature-flag configurations as custom resources in Kubernetes. And if you install the OpenFeature operator, it deploys flagd.

But what it's essentially giving you is you define your feature flags as custom resources in Kubernetes. Those have evaluation rules in them, and they're delivered to the workload by gRPC from flagd basically. And that's how the whole thing works.

But at the end of the day, you can use flagd in any way you want. You can run it as kind of a standalone server outside of Kubernetes and just point it at files or at HTTP endpoints, and it's going to treat those as feature-flag sources and then act as an evaluation server for client workloads that are communicating with it over gRPC.
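For a concrete sense of the file-sourced case Todd describes, a flagd flag definition looks roughly like this; treat the exact schema as an approximation rather than a reference:

```json
{
  "flags": {
    "show-cowsay": {
      "state": "ENABLED",
      "variants": {
        "on": true,
        "off": false
      },
      "defaultVariant": "off"
    }
  }
}
```

flagd watches a file like this (or a ConfigMap mounted as one) and serves evaluations of `show-cowsay` to SDK clients over gRPC; changing `defaultVariant` to `on` changes what clients see without redeploying them.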

ABDEL SGHIOUAR: Got it. OK, interesting. And so that's flagd. And then another stupid question: for a SaaS solution, how does that typically look for developers? So you have an endpoint that, of course-- the same concept, an endpoint that you evaluate to get your flags. But then, as an admin, you have some sort of back office, backend portal thing. Can you just walk us through, at a high level, how that typically looks?

THOMAS POIGNANT: Yeah, most of the time, it's just a web UI, where you can enable flags and change the percentage of who should have which variation, this kind of thing. I think one of the goals of all these vendors and SaaS solutions is also to enable anyone to act on production and be able to roll out a feature, even with no tech knowledge at all.

So simply speaking, your product manager could be the one enabling a new feature in production, if you have feature flags set up. They can just say, oh, I'm going to enable it for myself, test it in production, see how it looks, and start doing the rollout themselves. So that's really, I think, the end goal of feature flagging: being sure that anyone can roll out a new feature without any tech knowledge at all.

ABDEL SGHIOUAR: Yeah, like self-service thing.

THOMAS POIGNANT: Exactly. And that's where all the solutions are going, because this is not only a tech tool. It's even more than that. It's like a new paradigm that you can use in your company: you say, this is our new way to work. Developers ship to production behind the flag, and afterwards, it's the responsibility of the product manager to release it to production. And it really changes the way your organization looks, because you don't need a developer to roll out a new feature.

TODD BAERT: Yeah, and to add on to that, a lot of the most mature feature-flagging deployments or architectures I've seen actually kind of have multiple levels of responsibility and segmentation of responsibility for different flags. So a team might own some of its own operational flags that it implements completely on its own, just to ease development and release.

And then, there might be other flags that are controlled by a company-wide, cross-cutting operational team so they can coordinate things, and then some other flags that are owned by a product manager or whoever, that are more marketing-level or experimentation-level. And all of these work on different parts of the application to control different things, and they may even be coming from separate control planes. So the developers may be defining their feature flags in Kubernetes as custom resources, whereas the PM, who probably doesn't want to be committing to Git, is controlling things with some kind of UI or something like that.

ABDEL SGHIOUAR: Got it. As I was reading through the documentation, I felt like this is something that you could even give access to, I don't know, marketing people so they can turn on and off campaigns, or offers, or discounts, or changes, or whatever through the website as they see fit. So it sounds like the standardization that OpenFeature is aiming to do is trying to make everybody's life easier but mostly developers' life easier so you're not just keeping up with people asking you to change things all the time.

THOMAS POIGNANT: And actually, this is one of the big advantages of OpenFeature. With most of the vendors, since you implement with one SDK, you have one solution implemented everywhere, and you're stuck with this solution. While with OpenFeature, you can have different vendors behind your OpenFeature implementation.

So exactly what Todd said: if you want flagd for your operational flags, like for, I don't know, database migrations or things like that, you can have flagd there. But you can have another, more UI-driven thing for your marketing people. And this is exactly what OpenFeature is offering: you can plug multiple systems into the same tooling, let's say, and you will do it exactly the same way everywhere in your code, but you have different providers and different backends behind it. So this is also one of the big advantages of OpenFeature.
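To make that concrete, here's a minimal sketch of the pattern Thomas describes. This is not the actual OpenFeature SDK; the class names, flag names, and method signatures below are invented for illustration. The idea is that application code evaluates flags through one client API, while the provider behind it is swappable:

```python
from abc import ABC, abstractmethod


class Provider(ABC):
    """Backend that actually resolves flag values (a vendor, flagd, etc.)."""

    @abstractmethod
    def resolve_boolean(self, flag_key: str, default: bool, context: dict) -> bool:
        ...


class InMemoryProvider(Provider):
    """Toy stand-in for one backend, e.g. flags owned by the dev team."""

    def __init__(self, flags: dict):
        self.flags = flags

    def resolve_boolean(self, flag_key, default, context):
        # Fall back to the code-supplied default for unknown flags.
        return self.flags.get(flag_key, default)


class Client:
    """One evaluation API for application code, regardless of backend."""

    def __init__(self, provider: Provider):
        self.provider = provider

    def get_boolean_value(self, flag_key, default=False, context=None):
        return self.provider.resolve_boolean(flag_key, default, context or {})


# Application code calls the same API either way; only the provider differs.
ops_client = Client(InMemoryProvider({"use-new-db": True}))
print(ops_client.get_boolean_value("use-new-db"))      # True: flag defined
print(ops_client.get_boolean_value("missing", False))  # False: falls back to default
```

Swapping `InMemoryProvider` for a vendor-backed or flagd-backed provider would leave the `get_boolean_value` call sites untouched, which is the portability point being made here.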

TODD BAERT: Yeah, I would say you're right that it definitely removes-- primarily, it's the developers that have their burdens eased by OpenFeature, so they're not implementing changes all the time. I've been on teams where there were rapidly changing legal requirements or compliance requirements, and we had to react. I remember specifically-- both Thomas and I coincidentally have IAM backgrounds-- doing a big IAM migration, and there were a whole bunch of GDPR concerns. And even beyond that, there were concerns with specific geos that were more tightly regulated than even the general GDPR-compliance concerns.

So what we had to do was maintain this list of clients in geos, and we would be changing that list all the time, knocking entities out of it as they migrated one at a time. And we had to do that across, like, five microservices at the same time, and make sure they all were deployed at the same time. So we wanted it to be kind of atomic, but we had no way of actually doing that.

And if we had had a feature-flagging system at that time that could support it, what we could have done was just maintain this list in one feature-flagging system and targeted those specific customers, or targeted everybody but that set of customers. And we would have been able to modify that list on the fly, and all of the microservices would have had an up-to-date view. So those are the challenges from a developer perspective.
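As a rough sketch of what that could have looked like (the names here are hypothetical, not tied to any particular vendor), the migration list lives in one place and every microservice evaluates the same targeting rule against its request context:

```python
# Single source of truth: customers already migrated to the new IAM system.
# In a real feature-flagging system this set would live in the flag backend,
# not in code, so updating it would require no redeploys.
migrated_customers = {"acme-corp", "globex"}


def use_new_iam(context: dict) -> bool:
    """Targeting rule: only already-migrated customers take the new code path."""
    return context.get("customer_id") in migrated_customers


# Each of the five microservices would evaluate the same flag per request,
# so knocking a customer in or out of the list takes effect everywhere at once.
print(use_new_iam({"customer_id": "acme-corp"}))  # True: migrated
print(use_new_iam({"customer_id": "initech"}))    # False: still on the old IAM
```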

You can see that was a lot of burden, a lot of operational burden, deployment burden. Just going in there and changing some list in five different microservices is a real pain, and then making sure it deploys. And look, we had some other unrelated change, and now the deployment failed, and we have to roll everything back. You can see what a pain that is. So that's certainly one thing.

But the other thing is, for ops personnel, for marketing, this really does open up a world of experimentation, like you were saying. They want to trial a new color, a new banner color, a new pricing scheme, whatever it is. If the developers have just baked in some basic feature flags, then marketing can write their own rules. They can write their own targeting rules. They can write their own dynamic evaluation. And the developers are now removed, so marketing can go crazy within some constraints, because the developers are actually the people who put the pivot points in the code. But in terms of which code path executes, that's been delegated.

ABDEL SGHIOUAR: Yeah, that's interesting. So I'm going to end on this question. Since you mentioned it, Todd, you both have IAM backgrounds. Is there any crossover between IAM and feature flagging? What's the story there?

THOMAS POIGNANT: No, I don't think there's much crossover, actually. I think it just ended up like this. I think it's really a developer problem. Wherever you come from, when you start looking at it and you start using it, it's something you don't want to quit.

So I think, actually, that's why I ended up using feature flagging and wanting to be part of this community of people doing things around it. But I don't think IAM is what led me in that direction. I don't know about you, Todd.

TODD BAERT: I don't think there's a specific general trend or tie between IAM-- access management, identity management-- and feature flagging. But I would say there is a definite correlation between those things and specification development. And a lot of OpenFeature has been specification development.

And I know Thomas maintains a SAML library implementation. If you have dealt with SAML, you are a spec guy. You have to be into it, because these things are very specification-heavy. And so I think if there's anything beyond coincidence in the fact that we both have some IAM background, it's probably because we're a little bit, like, spec nerds. We like defining behavior.


ABDEL SGHIOUAR: All right, cool. Well, can I use that as a way to introduce you for the show? So "spec nerds" is the way to go. All right, cool. Well, thank you very much. This was eye-opening. I learned a lot. I had no idea what feature flags were. Now I have a little bit of an idea about what they are, so thanks for your time.

THOMAS POIGNANT: Thanks for having us.

ABDEL SGHIOUAR: Thank you, folks.

TODD BAERT: Yeah, for sure. Thanks for having us.


KASLIN FIELDS: Thank you to Abdel, Thomas, and Todd for that interview. I've heard a lot about feature flagging over the years, and it seems like something that's basic yet revolutionary. I feel like every time I hear it come up at a conference, or in a talk, or something, people are like, you need to be doing this. It makes life better. But also, it's something that's very fundamental in the workflow of things.

ABDEL SGHIOUAR: It's true. I mean, I have to admit that before preparing and doing the interview, I didn't really know much detail about what feature flagging is. I was just like you. I was just hearing about it.

KASLIN FIELDS: Yeah, I've never used it much myself.

ABDEL SGHIOUAR: Exactly. And it sounds like this thing that is, maybe not-- "niche" is not the right word but something that developers do all the time. But what prompted us, at least, to consider talking to the OpenFeature people is the fact that there is now a project called OpenFeature, which is trying to unify feature flagging in a way.

KASLIN FIELDS: I think that's very interesting. I think what you're trying to get at there is that I mainly hear about feature flagging in use cases within companies. It's not something that I hear individual developers doing on their own. So it kind of has a similar problem to Kubernetes itself, where, if you aren't doing work in your job that uses it, then you probably haven't used it yourself.

ABDEL SGHIOUAR: It's definitely not something that everybody needs or does. I mean, probably "niche" is the right word. It's probably very niche.

KASLIN FIELDS: In terms of personal projects, at the very least. I'm sure there are companies where it's like, this is the way.

ABDEL SGHIOUAR: Yeah, I mean, I'm sure at big companies like Google and Microsoft and AWS, that's just a common practice. You change software so much that you want to be able to alter the behavior of the software on the fly to try out new things. I have a friend who works for-- we won't mention the name-- a popular streaming company. They do feature flagging a lot, because it allows them to drive experiments from a user perspective: they can turn features on for specific users and then measure whether those features have changed user behavior in any significant way. But that's not only feature flagging, because you need feature flagging for turning things on and off, but you also need to track the experiment to see if there is impact.

KASLIN FIELDS: It's really about iteration. You have to have a project that you're iterating on and you want to change over time, and thus the feature flags are important. So I guess that's why I find it rare in individuals' personal projects, though. You generally need a project that's going to be kind of long-lived and changing over time in order for it to make sense, I suppose.

ABDEL SGHIOUAR: Definitely. I mean, arguably Kubernetes is--

KASLIN FIELDS: Yeah, for sure. And it's been going for 10 years.

ABDEL SGHIOUAR: So yes, Kubernetes is a good example of a software that uses feature flagging because the API server has a lot of flags.

KASLIN FIELDS: I actually wasn't aware of that. I mean, it makes sense, just logically. I'd love to see a talk about the feature flags used within Kubernetes, specifically on that topic.

ABDEL SGHIOUAR: Yeah, I mean, we should probably get somebody on for that. On the API server, at least-- if you have deployed Kubernetes yourself-- I mean, all the components have configuration files with flags, and those flags have values. But the API server is the one that has the biggest number of flags, because it's the API server.

KASLIN FIELDS: So when I did a lot of deploying Kubernetes from scratch myself, that was always one of the most frustrating parts for me, honestly: deciding what flags to set on the API server, because there are a whole lot of them. And it's pretty hard to understand what they all do, especially because I was often just like, I just need to get a cluster running, but then I also need to document it, and I probably need to document what the flags are. And that's a lot of things to document all at once.

ABDEL SGHIOUAR: Yes. And depending on which ones you turn on and off, your API server, or your cluster generally speaking, might take more or less time to become ready, depending on which features you enable.

KASLIN FIELDS: Right. It can have a significant impact on the cluster itself. That's the idea.

ABDEL SGHIOUAR: Exactly. Exactly.

KASLIN FIELDS: I also really liked the parallel that you noticed there between the terminology of feature flagging and flags on the command line. I don't think I had ever really thought about that, but I think you have a point that maybe the term came from there or something. I also really liked how Thomas and Todd went over the difference: feature flagging is a much more dynamic concept than flags on a command line, which are kind of something that you set, and it's part of the command line, and that's just how it is.

Feature flags are all about turning things off and on and iterating. But still, the concept of how it works, turning something on, doing something with the code, I feel like makes a lot of sense. I do feel like there's a connection there.

ABDEL SGHIOUAR: Yeah, I think that one of the interesting things is that if you've been a command-line user, you use flags all the time, for any command line, or most command lines that you use, to alter the behavior of the command. But you are more on the using side, which is part of what something like flagd, which we talked about in the episode, does.

So flagd is a server that you can configure to serve flags, or to serve the values of the flags, to a microservice or an application. But then, on the other side, on the application side, it's, how do you implement those flags, and how do you turn on and off certain parts of the code based on what the value of the flag is? So it's like two sides of the same component, I think, with the same story; we've just been more exposed to the user side of it, not the developer side of it. At least-- I'm talking about me here.
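For reference, flagd reads its flags from a JSON definition. A minimal example might look like the following; the flag name, variants, and targeting rule here are invented for illustration. Targeting rules in flagd are JsonLogic expressions that return the key of the variant to serve:

```json
{
  "flags": {
    "new-welcome-banner": {
      "state": "ENABLED",
      "variants": { "on": true, "off": false },
      "defaultVariant": "off",
      "targeting": {
        "if": [
          { "in": [{ "var": "email" }, ["beta-tester@example.com"]] },
          "on",
          "off"
        ]
      }
    }
  }
}
```

An application would then evaluate `new-welcome-banner` through an OpenFeature client backed by a flagd provider, passing an evaluation context (here, the user's email), and flagd returns the matching variant's value.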

KASLIN FIELDS: And so then this open-source project that they're working on, OpenFeature-- I feel like a very common theme in open source is wanting to set a standard that works across environments and is commonly used. And it sounds like that's kind of what they're going for here: setting a standard. And I wonder if this will make feature flagging easier for folks to pick up.

We were talking about those personal use cases and things. And it's a very useful technology if you have a piece of code that you're going to be iterating on over a significant amount of time. So I wonder if this will make it more approachable, as well as creating a standard.

ABDEL SGHIOUAR: It's definitely the goal of the project, or one of the goals, at least: creating that standard. And then, as we discussed in the episode, you also have the flag providers. So there is the standard, and there are a bunch of libraries you use in code, but then you can use whatever flag provider you want behind it. I mean, they have their own flag provider, which is part of OpenFeature, but there are a bunch of other ones. So yeah, it's setting a standard but then also being open enough that other people can have other implementations.

KASLIN FIELDS: It sounds a little bit to me like Gateway API, because when I think about an open-source standard, honestly, my brain goes immediately to Gateway API. We have defined this standard, and we want everybody to use it. And so individual vendors and cloud vendors have to implement their own version of the Gateway API-- Gateway API.


ABDEL SGHIOUAR: Gateway API, yes. Yeah, it's like when I was writing the intro for this episode-- like, I'm Abdel Sghiouar, and I'm Abdul Sghiouar, right?

KASLIN FIELDS: Yeah, words.


ABDEL SGHIOUAR: Definitely. So I think that's probably a common-- I mean, not common theme across all the CNCF projects, but a bunch of projects that are in the various stages have that sort of mentality of setting a standard--

KASLIN FIELDS: Structure, right?

ABDEL SGHIOUAR: Yeah, exactly. And actually, this is the reason why the Gateway API is a standalone SIG within Kubernetes: because they're trying to create a standard. They're not under networking, or SIG Networking, which is kind of funny, because then you still have the Services of type LoadBalancer and ClusterIP under a different SIG, not under the Gateway API.

KASLIN FIELDS: Yeah, there's been a lot of discussion about that in open source as a model: creating the standard API and then asking all of the vendors that want to be part of that to implement their own, as a way to help manage the Kubernetes code base. A big thing that's going on, that we've talked about before, is the removal of cloud-provider code from upstream Kubernetes. In the beginning, when cloud providers started to run Kubernetes, they put all of their cloud-specific code into Kubernetes itself.

So even if you downloaded it just to run it on a Raspberry Pi in your house, you still had all of that cloud-provider code, which was a little silly in retrospect. And so they've been working on taking that out. I'm really excited for 1.31, when the plan, I think, is to hopefully have the last of that. We'll see how that goes. Hopefully, we'll have something exciting to talk about there. But as a pattern, it's really exciting for open source, because you set the standard, the community defines what is needed, and then all of the vendors interact with that in ways that make sense for their respective platforms.

ABDEL SGHIOUAR: Yeah, exactly. Yeah, that's, for sure, one of the goals of OpenFeature as a project.

KASLIN FIELDS: Yeah. And so now they're doing that for feature flags. So if you're excited about feature flags and you use them yourself, you might want to look into what the standard is promoting. And if you have strong opinions about feature flags, maybe you want to add some comments and let them know if they're missing anything or maybe get involved with the project and become a contributor.

ABDEL SGHIOUAR: Yeah, maybe. That's-- yes. So it is a funny story. I interviewed Thomas and Todd, and we, of course, did it virtually. And then I was at KubeCon in Paris. And Thomas is French, so he came-- walked up to me, and he obviously knew who I was, and I had no idea who he was.



ABDEL SGHIOUAR: So he was talking, introducing himself, and then he was like, I am Thomas. And I'm like, yeah? And he was like, OpenFeature. I'm like, oh, damn, OK.

KASLIN FIELDS: Oh, that Thomas-- right, yes.

ABDEL SGHIOUAR: That Thomas, yes. Because people look different on camera than they look in person.

KASLIN FIELDS: Very true. And especially at an event, where you're there to talk to people. Someone comes up to you to chat. That's just how it works.

ABDEL SGHIOUAR: Yes, and you meet a lot of people. You're not going to keep track of who you meet every day.

KASLIN FIELDS: Yeah, understandable. Did they do anything in particular at KubeCon that you know of? I wasn't paying attention.

ABDEL SGHIOUAR: I don't remember. They might have had a stand in the Projects Pavilion, because it's an incubated project, technically. So they should.

KASLIN FIELDS: Yeah, we didn't get to do a chatter for our KubeCon episode, but I do hope that-- I mean, we kind of did one at the beginning. I do hope that they continue with the Projects Pavilion thing.

ABDEL SGHIOUAR: I think it was cool. I walked up to it, and I chatted with a bunch of people. It was quite interesting to have people who are maintainers and people who are involved in various projects be present.

KASLIN FIELDS: And I look forward to having more of them on the podcast.

ABDEL SGHIOUAR: Yes. We're looking forward to going to Utah. We'll see. We might have big plans. I don't want to promise anything, but we might have plans.

KASLIN FIELDS: So thank you, Abdel, for teaching us today about feature flags and OpenFeature. So check out the open-source project if you're interested in using feature flags yourself.


That brings us to the end of another episode. If you enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on social media @kubernetespod or reach us by email at <kubernetespodcast@google.com>. You can also check our website at kubernetespodcast.com, where you will find transcripts and show notes and links to subscribe. Please consider rating us in your podcast player so we can help more people find and enjoy the show. Thanks for listening, and we'll see you next time.