I was able to grab a few minutes with Brian at the Linux Foundation Member Summit at the beginning of November. We talked about the genesis of OpenSSF, his initial priorities, how to influence behaviors around security, and what sorts of "carrots" might help developers to develop more secure software.
Some links to topics discussed in the podcast:
Listen to the podcast [MP3 - 16:26]
[Transcript in process]
Gordon Haff: Hi, everyone. This is Gordon Haff, technology evangelist at Red Hat. I'm pleased to be here at the Linux Foundation Member Summit in very nice Napa, California, with Brian Behlendorf, who's the newly minted general manager of the Open Source Security Foundation.
Brian, what is the Open Source Security Foundation? What was the impetus behind creating this?
Brian: Over a year ago, actually, just before the pandemic started [laughs], small groups formed simultaneously at two different firms, GitHub and Google, each starting to really think about this problem of application security and dependencies and what happens during build time and distribution.
Like all these blind spots that we have in the open source community around how code is built and deployed and makes its way to the end users. It's funny how both started simultaneously. Then, people realized it'd probably be better to combine forces on something like this. There wasn't any budget, there wasn't really any clear ownership.
The Linux Foundation stepped in, partly at the behest of these companies, and became a home for informal collaboration around what really could be done here.
Then, that group came up with: Let's focus on developer identity and signatures on releases and things. That became a working group.
Let's look at best practices and education materials: That became the best practices working group.
Six different working groups were formed and a bunch of projects underneath.
Then, some momentum started to build, along with a recognition that there might be some systematic ways to address these gaps across that entire lifecycle: code coming out of a developer's head going into an IDE, the developer choosing the dependencies to build upon, and then all the way to distribution.
There are all these points of intervention, places where there could be improvements made. That became the Open Source Security Foundation. Then, after about a year of this mapping the landscape and figuring out what to do, it was clear that there were some places where some funding could be applied.
In the typical Linux Foundation fashion, we said, "Well, let's see who's interested in solving this problem together." There are a bunch of existing organizations, about 60 or so. Most of those and a whole bunch of new ones came together and agreed to pool some funds.
Which ended up being over $10 million to go and tackle this space. Not with the specific idea of, "We're going to build this product," or "We're going to solve this one thing," but a more general purpose like, "Let's uplift the whole of the open source ecosystem."
"Raising the bar" is one way to think of it, but "raising the floor" is a phrase I prefer.
As momentum gathered around a properly funded entity, I had been concerned about this space for a long time, having led the Hyperledger initiative as executive director, as well as Linux Foundation Public Health, and I said, "I'm happy to help on this, and I probably should," so I jumped over to lead this.
We launched that in mid-October, announced the funding, and I have our first governing board meeting tomorrow, Friday.
Gordon: Good luck with that. Not to make this too inside baseball, but the Linux Foundation had its Core Infrastructure Initiative that was kicked off, I guess, by Heartbleed and some of those problems, and it seems the focus has shifted a bit.
Brian: The Heartbleed bug was specifically in OpenSSL. Jim Zemlin, my boss, went around and passed the hat and did raise a healthy chunk of funds to try to expand it beyond the two developers named Steve, working in their spare time, or in their consulting time, I think, to try to build a larger community. I think that had some success. We've had other initiatives like the CII Badging effort, which is now being rolled into OpenSSF, and lots of focus on security in the Linux kernel, so there've been these different security initiatives.
Oh, and a really big one has been SPDX, which started life focused on licensing: making sure that this big tarball of code I have, and all its dependencies, are appropriately licensed and appropriately open source, "I'm following all the rules," and that kind of thing.
Now people are realizing, "Oh, it's easy to extend this to a proper SBOM type of scenario." If I've got these versions of code, I can understand which ones are vulnerable, and it really helps with auditing and understanding, not just in this tarball but across my entire enterprise, "Where might I be vulnerable to outstanding CVEs that have been fixed by updates?" and that kind of thing.
The SPDX effort, now an ISO standard, sits alongside this as a complementary Linux Foundation effort. With OpenSSF, I think we're trying to be more systematic about what the tooling is, what the specifications are, what the standards are.
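The SBOM audit Brian describes, checking an inventory of package versions against a feed of known vulnerabilities, can be sketched roughly like this. The package names, versions, and advisory data below are all hypothetical; a real tool would parse a full SPDX document and query a vulnerability database.

```python
# Sketch of an SBOM-driven vulnerability check. An SPDX-style SBOM lists
# packages with name and version fields; we flag any entry that appears
# in a (hypothetical) advisory feed of known-vulnerable versions.

sbom = {
    "packages": [
        {"name": "openssl", "versionInfo": "1.0.1f"},
        {"name": "zlib", "versionInfo": "1.2.13"},
    ]
}

# Hypothetical advisory feed: package name -> set of vulnerable versions.
advisories = {
    "openssl": {"1.0.1f", "1.0.1g"},  # e.g. Heartbleed-era releases
}

def vulnerable_packages(sbom, advisories):
    """Return (name, version) pairs present in both the SBOM and the feed."""
    hits = []
    for pkg in sbom["packages"]:
        name, version = pkg["name"], pkg["versionInfo"]
        if version in advisories.get(name, set()):
            hits.append((name, version))
    return hits

print(vulnerable_packages(sbom, advisories))  # [('openssl', '1.0.1f')]
```

The same lookup scales from one tarball to an enterprise-wide inventory: the hard part in practice is generating accurate SBOMs, not the matching itself.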
Also, what's some training we can do? What are ways to help individual open source projects, even outside the Linux Foundation, have better processes and be better supported in prioritizing security?
Gordon: A lot of what I've heard about security at this event has been around supply chain security. Obviously, security covers a lot of stuff, but it at least appears that your real initial focus is on supply chain security.
Brian: It's funny. I wanted to call it software NFTs, but I got shut down on that. Somebody told me earlier this week that we used to call this SCM, software configuration management. In fact, source code management tools, like GitHub, or Git and Subversion, have long been about having that picture of where software came from.
The metaphor of the supply chain works not only because supply chains are hot right now, with the ships sitting off the Port of Long Beach, but also because of this recognition that software does have a journey, and that we are building on top of open source components so much more than previously.
I remember 25 years ago, when I was first getting involved, you think about what the dependencies were in Apache httpd, it was like glibc. Whatever the operating system provided, it was pretty minimal.
These days, open source packages will have thousands of dependencies, partly because developers push tiny packages, around 10 lines of code, to npm and PyPI and places like that. You aggregate all these together, and it ends up being much harder to audit, much harder to know if you're using updated versions.
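The dependency explosion Brian describes can be illustrated with a short sketch that walks a dependency graph and counts everything an application transitively pulls in. The graph below is hypothetical; real resolvers read metadata from registries like npm or PyPI.

```python
# Why transitive dependencies balloon: a breadth-first walk over a
# (hypothetical) dependency graph, collecting every package an
# application reaches, not just its direct dependencies.
from collections import deque

deps = {
    "my-app": ["left-pad", "request"],
    "request": ["uuid", "tough-cookie"],
    "tough-cookie": ["punycode"],
    "left-pad": [],
    "uuid": [],
    "punycode": [],
}

def transitive_deps(root, graph):
    """Return every package reachable from root, excluding root itself."""
    seen, queue = set(), deque([root])
    while queue:
        pkg = queue.popleft()
        for dep in graph.get(pkg, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(len(transitive_deps("my-app", deps)))  # 5 packages from 2 direct deps
```

Two direct dependencies already pull in five packages here; real-world npm or PyPI trees repeat this pattern until there are hundreds or thousands of entries to audit.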
The framing of a supply chain seemed to better crystallize the fact that there are a whole lot of different parties that touch this stuff. It also helps characterize that this is an issue that industry cares about, that is global in nature, and that governments are starting to care about now too.
I'd say one of the big galvanizers for getting this to be a funded initiative was the White House executive order back in May, calling for the technology industry, not just open source, to get better about supply chain security and to address the kinds of vulnerabilities exploited in the SolarWinds hack and other famous breaches of the last few years.
Gordon: I'd like your reaction to how well understood this problem is. I've seen numbers that are all over the place. The Linux Foundation has some numbers indicating that, with the executive order, maybe things weren't so bad from an awareness point of view. However, Red Hat has run this Global Tech Outlook survey for a few years. When we asked about funding priorities for security, third-party supply chain was basically the bottom of the barrel at 10 percent.
What's your reaction? What are we seeing here?
Brian: Security is so hard to price. You ask somebody, "Do you want to use secure software?" Nobody says no. But what objective metric do we have to know what's secure enough? Other than, "Have you been hacked recently?" "Are you aware that you've been hacked recently?"
If your answer is, "I've not been hacked," the reality is probably that you're not aware you've been hacked. We really are lousy at coming up with objective ways to say we've hit a score when it comes to the security of the software, or the risk around a breach, or that kind of thing. We do know for sure when we lack a feature that would be generating revenue for us.
In product roadmaps, whether we're talking about commercial software or even open source software, feature work tends to win out over paying off technical debt, which tends to win out over updating dependencies, which tends to...
It's a shame: even though people say they prioritize this, it's hard to do. One of the things I've been thinking about as I've dived into this is how we get security to be not a checklist, not a burdensome, bureaucratic kind of thing that developers feel they have to follow, but instead a set of carrots that would incentivize devs to add that extra information, to update their dependencies more often, and to make it easier for their own end users to update.
There are some software packages that make updates smooth: they respect legacy APIs and don't change things a lot. There are others where every minor point release ends up being a rather disruptive update.
One of those things that might be imagined is if cloud providers...Well, first off, through the work that communities are doing on SLSA and Sigstore, and through some of the other specifications work, we'll get to the point where you'll be able to generate a relative, perhaps even in some sense absolute, metric of the integrity, the trustworthiness, and the risk profile of one tarball versus another, or one collection of deployed software versus another, one system image versus another.
I think cloud providers might be in a position to start to say, "Hey, we will charge less money to run this image if it has a lower risk associated with it, if there's better attestation around the software, if it's less a collection of one-offs done by people in their spare time and more something that's been vetted, something that's been reviewed.
"Something that is pretty well established with minor improvements, versus this other thing." If we can get incentives from the cloud hosts to charge less for that... A farther-off future might even involve insurance companies. How do we manage risk out there in the real world?
It tends to be by buying insurance to cover our costs if our car dies on us or we have a health scare or something like that. The pricing of premiums in insurance markets is one way to influence certain behaviors. It's one reason people stopped smoking because their health insurance premiums went up if they kept smoking.
Is there a way to make [laughs] tolerating vulnerabilities in software like smoking where you can do it, but it's going to get expensive for you? Instead, if you just updated that underlying version, your cost would come down. Maybe this is a path to getting that to matter more in people's roadmaps.
Gordon: I guess another way to ask the same thing: is this a matter of people needing to do this, but it's going to be expensive, they're going to have to spend a lot of money? Or do they need to do it, but it doesn't necessarily need to be that onerous?
Brian: Signing your releases, having a picture of when the dependencies you depend upon are vulnerable and might be worth updating: there's a whole batch of activities we can do to make the development tools, and the way stuff gets deployed, embody these specifications and these principles out of the gate, so that the right thing automatically happens.
There are improvements we could make in the standard software dev tools out there, maybe even in places like GitHub and GitLab, to make the cost of adopting these things really low for developers. Make it the default. Make it the norm in the same way that accessing a TLS website today is the norm. It's almost unusual to go to one without a TLS certificate.
You'll get warned away now in current versions of Chrome. We've got to do that at the same time as we create incentives to do the things where there's unavoidably a cost. When you update an underlying dependency, it's almost never zero cost. What's the reason to do that? There have to be a series of carrots, as you call it, and hopefully very few sticks.
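One of the low-cost practices in this discussion, letting consumers verify that a release artifact matches what the project published, can be sketched as a simple checksum comparison. The file paths and digests here are placeholders, and real release signing (for example, with Sigstore) goes well beyond a bare hash check.

```python
# Sketch of artifact verification: hash a downloaded release and compare
# it against the digest the project published alongside the download.
import hashlib

def sha256_of(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_release(path, expected_digest):
    """True if the artifact on disk matches its published checksum."""
    return sha256_of(path) == expected_digest
```

A checksum only proves integrity against the published value; signatures add the "who published this" attestation that projects like Sigstore aim to make the automatic default.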
Finally, though, given the government interest in this domain, you will start to see executive orders like we saw even today, I believe it was, or yesterday. There was a White House executive order telling all the federal agencies they've got to update the firmware on routers and deal with a specific set of outstanding known vulnerabilities, or shut those systems down.
That's ultimately what you have to do when you're running old code that's unsafe. We might also start to see regulated industries like finance or insurance, where the regulators might start to say, "Hey, if you're running code that hasn't been updated in five years, you are a clear and present danger to everyone else in the ecosystem. Shape up or ship out."
It'll be interesting to see if this starts to be embedded in the systems of the world that way.
Gordon: Supply chain is your initial focus here. If you're looking a little further out, what are some of the other problems? What might be the next two or three problem areas you'll attack?
Brian: I've heard tons of stories about this recently. I think it's a pretty well-accepted trope, though I don't have metrics on this: application security and secure software development are not really taught in computer science courses, whether we're talking about traditional schools or the code academies, code camps, or even other spaces. We don't teach a lot of this stuff.
Of course, we're already working on this inside of OpenSSF, but how do we get it into the standard CS curriculum, all the code academies, and those kinds of places? It's an important thing to do, along with figuring out how to reward and recognize people for accomplishing it.
Gordon: Anything else you'd like to share with our listeners?
Brian: The Open Source Security Foundation is still pretty young. There's still lots of different touchpoints, lots of different things that we're either working on or thinking about working on. We now have some resources to go and apply to different domains.
If you are interested in this domain, if you've got a project you think is worthy of bringing to OpenSSF, or if you care about this for your own open source project, please come to openssf.org. Please come engage in the working group. Working groups are the primary unit of work [laughs] in our community.
We'd love to have you in no matter what level you're at in terms of expertise. We really want to help the entirety of the open source ecosystem uplevel in this space. All are welcome.