In addition to discussing the CII directly, Nicko also talked about encouraging open source developers to think about security as a high priority throughout the development process--as well as the need to cultivate this sort of thinking, and to get buy-in, across the entire community.
Nicko also offered advice about keeping yourself safe as a consumer of open source. His first point was that you need to know what code you have in your product. His second was to get involved with open source projects that are important to your product because "open source projects fail when the community around them fails."
Related link: the Core Infrastructure Initiative site, which includes links to a variety of resources created by the CII
Audio:
Link to MP3 (00:15:01)
Link to OGG (00:15:01)
Transcript:
Gordon Haff: I'm sitting here with Nicko van Someren, who's the CTO of the Linux Foundation, and he heads the Core Infrastructure Initiative. Nicko, could you give a bit of your background and explain what the CII is?
Nicko van Someren: Sure. My background's in security. I've been on the industry side of security for 20-plus years, but I joined the Linux Foundation a year ago to head up the Core Infrastructure Initiative, which is a program to try to drive improvements in the security outcomes of open-source projects. In particular, in the projects that underpin an awful lot of the Internet and the businesses that we run on it: the infrastructural components, those bits of open source that we all depend on, even if we don't see them on a day-to-day basis.
Gordon: Around the time that you came in -- you've been in the job, what, a little over a year, is that right? -- there were some pretty high-visibility issues with some of that infrastructure.
Nicko: Yeah, and I think it goes back a couple of years further. Around three years ago, the Core Infrastructure Initiative -- we call it the CII -- was set up, largely in the wake of the Heartbleed bug, which impacted nearly 70 percent of the web servers on the planet.
We saw a vulnerability in a major open-source project, which had a very profound impact on people across the board, whether they were in the open-source community, running commercial systems, or building products on top of open source. All of these people were impacted by this very significant bug.
While the community moved swiftly to fix the bug and get the patch out there, it became very apparent that as the world becomes more dependent on open-source software, it becomes more and more critical that those who depend on it support the development of those projects and support improving their security outcomes.
Gordon: Many of the projects that we're talking about were a tragedy-of-the-commons sort of situation, where you had a few volunteers -- not being paid by anyone, asking for donations on their PayPal accounts -- who, in many cases, were responsible for these very critical systems.
Nicko: Absolutely. Probably trillions of dollars of business were being done in 2014 on OpenSSL, and yet in 2013, the project received $3,000 worth of donations from industry to support its development. This is quite common for the projects that are under the hood, not the glossy projects that everybody sees.
The flagship projects get a lot of traction, with a big community around them, but there's all of this plumbing underneath that is often maintained by very small communities -- often one or two people -- without the financial support that comes with having big businesses putting big weight behind them.
Gordon: What exactly does the CII do? You don't really code, as I understand it.
Nicko: Well, I code in my spare time, but the CII doesn't develop code itself, for the most part. What we do is work to identify at-risk projects that are high-impact but low-engagement.
We try to support those projects by doing security audits where appropriate, occasionally putting engineers directly on coding, and often putting resources into architecture and security process to help them help themselves, giving them the tools they need to improve security outcomes.
We're funding the development of new security testing tools. We're providing tools to help projects assess themselves against well-understood security practices that will lead to better outcomes. Then, when they don't meet all the criteria, we help them achieve those criteria so that they can get better security outcomes.
Gordon: In terms of the projects under the CII, how do you think about that? What are the criteria?
Nicko: We try to take a fairly holistic approach. Sometimes we're investing directly in pieces of infrastructure that we all rely on: things like OpenSSL, Bouncy Castle, GnuPG, OpenSSH, and other security-centric projects.
But also things like network time: last year, we were funding a couple of initiatives around those components that we're all working with but don't necessarily see at the top layer. We're also funding tooling and test frameworks, so we have been putting money into a project called Frama-C, which is a framework for analyzing and testing C code.
We've been funding The Fuzzing Project, which is an initiative to do fuzz testing on open-source projects, find vulnerabilities, report them, and get them fixed.
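[Editor's note: The fuzz testing Nicko describes boils down to throwing randomized inputs at a program and recording anything that fails in an unexpected way. Below is a minimal sketch in Python; the length-prefixed parser is a hypothetical target invented for illustration, not code from any CII-funded project.]

```python
import random

def length_prefixed_parser(data: bytes) -> int:
    """Hypothetical target: first byte is a length, followed by that many
    payload bytes. Rejects malformed input with ValueError."""
    if not data:
        raise ValueError("empty input")
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:
        raise ValueError("truncated payload")
    return sum(payload)

def fuzz(target, iterations=1000, seed=0):
    """Feed short random byte strings to `target` and collect any input
    that raises something other than the expected ValueError."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(16)))
        try:
            target(data)
        except ValueError:
            pass  # clean rejection of bad input -- not a bug
        except Exception as exc:
            crashes.append((data, exc))  # unexpected failure worth reporting
    return crashes
```

Real fuzzers are coverage-guided and far more sophisticated, but the workflow is the same: generate inputs, run the target, triage the crashes.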
We've been working with the Reproducible Builds project to get binary reproducibility of build processes, so that when people download a binary, they can be sure it matches what would have been built if they had downloaded the source.
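[Editor's note: The check that reproducible builds enable can be illustrated with a short Python sketch. This is not the Reproducible Builds project's actual tooling, just the underlying idea: rebuild from source, hash both artifacts, and compare.]

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hex SHA-256 digest of a file, read in chunks to handle large binaries."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_rebuild(downloaded_binary: str, locally_rebuilt: str) -> bool:
    """A build is reproducible when the published binary and a local rebuild
    from the same source are bit-for-bit identical."""
    return sha256_of(downloaded_binary) == sha256_of(locally_rebuilt)
```

Getting to bit-for-bit identity is the hard part in practice; it requires the build process to normalize things like embedded timestamps and file paths so that two builds of the same source produce identical output.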
We're also funding some more educational programs. For instance, the Badging Program allows projects to assess themselves against a set of known good security practices, and they get a little badge for their GitHub project or for their website if they meet those criteria.
We have a Census Project, where we've been pooling different sets of data about the engagement in projects, the level of bug reporting, the turnaround time on bug fixes, and the impact of those projects in terms of who depends on them, and trying to synthesize some information about how much risk there is.
Then we publish those risk scores and encourage fixes. We're trying to take a mixture of some fairly tactical approaches, but also invest in some strategic approaches, which are going to lead to all open-source projects getting better security outcomes in the long run.
Gordon: How do you split those? Certainly, with some of the projects, particularly early on, it was very tactical: "There's frankly a house fire going on here, and it needs to be put out."
Then, some of the things that you're doing in terms of the assessment checklists and so on feel much more strategic and forward-looking. How do you balance those two? Or, if you could put a percentage on it, even: "I spend 30 percent of my time doing this"?
Nicko: That's, of course, the perennial question. We have finite resources and a huge need for this. Resource allocation is something I ask my board members for input on. We, historically, have had a fairly even split between the tactical and the strategic.
Going forwards, we're trying to put more into the strategic stuff, because we feel we can get better leverage, more magnification of the effect, if we put money into a tool and the capabilities to use that tool. One of the things we're looking at for 2017 is work to improve the usability of a lot of security tools.
There's no shortage of great tools for doing static analysis or fuzz testing, but it's often difficult to integrate those into a continuous test process for an open-source project. Building things that make it easier to deploy the existing open-source tools is an area of the strategic spend that we want to put a lot into in 2017.
Gordon: As we look forward at some of the areas that are developing at this point -- Automotive Grade Linux, for example, or Internet of Things devices -- there are new vectors of threats coming in, and areas of infrastructure that maybe historically weren't that important from a security perspective are becoming much more so. What's on your radar in that regard?
Nicko: I think, obviously, one of the biggest issues that we're facing going forwards is with the Internet of Things. We have been seeing a lot of people forgetting all the things that we've learned in desktop and server security over the years as they rush to get things out there, Internet-connected.
Often, it's easy to have a good idea about Internet-connecting something and building a service around it. It's less easy to think about the security implications of doing that in a hasty manner.
We've been talking with a number of players in this space about, "How do we adapt some of the programs we've already built for improving the security process in open-source projects to apply those to the development of IoT devices?" I think that we can do quite a lot in that space, just with the tools we've already got, by tuning them to the appropriate community.
Gordon: Anything else that you'd like to talk about?
Nicko: One of the biggest issues we face in improving the security outcomes of open source is encouraging open-source developers to think about security as a high priority -- as high a priority as performance or scalability or usability.
We've got to put security up there as one of the top items on the priority list. We also have to make sure that, because most open-source projects get developed in a very collaborative way with a community around them, you get buy-in across the whole community to treating security as a priority.
That's the best first step to getting good security outcomes: have people think about security early, have them think about it often, and have them keep it as a top-of-mind priority as they go through the development process. If they do that, then you can get very good security outcomes just by using the same practices we use everywhere else in software engineering.
Gordon: One of the areas I work in is around DevOps and continuous integration and application platforms. One of the terms that's starting to gain currency is "DevSecOps," and the push-back on that is, "Oh, we know security needs to be in DevOps." Well, if you know it, it doesn't happen a lot of the time.
Nicko: I think that's true. It's a question of making sure that you have it as a priority. At my last company, I was actively involved in doing high-security software, but we were using an agile development process.
We managed to square those two by making sure that security was there in the documentation as part of the definition of done. You couldn't get through the iterative process without making sure that you were keeping the threat models up to date and going through the security reviews.
Code review ought to involve security review, not just making sure that tabs are replaced by four spaces. We need to integrate security into the whole process of being a community of developers.
Gordon: One final area, and it's probably less under the purview of something like the CII, but as we've been talking about a lot at this conference, open source has become pervasive, and that's obviously a great thing.
It also means that people are in the position of grabbing a lot of code -- perfectly legally -- from all kinds of different repositories and sticking it into their own code, and it may not be the latest version, and it may have vulnerabilities.
Nicko: Absolutely, and I think, key to keeping yourself safe as a consumer of open source...
Well, there are probably two things there. One is that you need to know what you've got in your products; whether you built them yourself or brought them in, there's going to be open source in there.
You need to know what packages are in there and what versions of those packages are in there. You need to know how those are going to get updated as the original projects get updated. That whole dependency tracking needs to be something you think about as part of your security operations process.
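[Editor's note: A minimal sketch of the dependency tracking Nicko describes, using only the Python standard library. The pinned package names and versions in the examples are hypothetical; a real security operations process would feed an inventory like this into a vulnerability database.]

```python
from importlib import metadata

def installed_packages() -> dict:
    """Inventory: map each installed Python distribution to its version."""
    return {dist.metadata["Name"]: dist.version
            for dist in metadata.distributions()}

def find_drift(pinned: dict, installed: dict) -> dict:
    """Compare a pinned manifest against what is actually installed.
    Returns {name: (expected, actual)} for anything missing or mismatched;
    a missing package shows up with actual == None."""
    return {name: (version, installed.get(name))
            for name, version in pinned.items()
            if installed.get(name) != version}
```

For example, `find_drift({"some-tls-lib": "1.0.2"}, installed_packages())` would flag a machine still carrying an old, possibly vulnerable pin (the package name here is made up for illustration).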
The other bit is: get involved. Open-source projects fail when the community around them fails. If you want a good security outcome from the open-source projects that you use, get involved. Don't just complain that something doesn't work; come up with a well-diagnosed bug report and file it.
Maybe produce a patch, and even if you don't produce the patch that gets accepted, you've given them the idea for how to fix it, and they'll go and recode it in their own style. If you're going to be dependent on the security of a project, put an engineer on it.
Get involved in these projects. That's the way to make sure that you get really good security outcomes: for people who care about the security of these products to get involved.
Gordon: Well, I think that's as good a finish as any! Thank you.