This episode is part of Innovate @Open, a new podcast about open source, with a particular focus on how collaboration and openness are leading to new inventions and innovations.
Show notes:
- The Enarx project
- New Cross-Industry Effort to Advance Computational Trust and Security for Next-Generation Cloud and Edge Computing (Linux Foundation)
- Enarx for Everyone (a quest) by Mike Bursell
- Trust, Enarx and TEEs, and the nature of open source security [15:51 MP3]
Transcript:
Gordon Haff: You're listening to "Innovate @Open," stories from the cutting edge of technology innovation rooted in open-source software and collaborative processes. I'm your host, Gordon Haff.
[music]
Gordon: What I have today is a podcast I recorded last week at devconf.us with Mike Bursell and Nathaniel McCallum. In that podcast we talked about the nature of trust, and specifically about a new project called Enarx, an application deployment system that lets applications run within trusted execution environments.
This was particularly timely because today, August 21st, the Linux Foundation announced the intent to form the Confidential Computing Consortium.
The basic idea here is that as companies move their workloads across a range of environments, including hybrid computing environments, they need protection controls for sensitive IP and workload data, and they're increasingly seeking greater assurances and more transparency around those controls.
The challenge is that current approaches in cloud computing address only data at rest and data in transit. Encrypting data in use is considered the third, and possibly the most challenging, step to providing a fully encrypted life cycle for sensitive data. Let's kick things off by having Mike tell us about trust.
Mike Bursell: When you run any process, any application, any program on a computer, it's an exercise in trust. You are trusting that all the layers below what you've written, assuming you've written it right in the first place, are things you can trust to do what they say they're going to do on the tin.
I've got to trust my middleware, I've got to trust the firmware, the BIOS, the CPU or the hardware, the OS, the hypervisor, the kernel, all the different pieces of the software stack. I've got to trust them to do things like not steal my data, not change my data, and not divert my data to somebody who shouldn't be seeing it. That's a lot of pieces.
If you're looking at a standard stack of 10 or 12 pieces, just think about all the different libraries you're using, all the different parts of the kernel, all those different bits. How can you trust all of that? That's a real difficulty.
It's the reason that people generally don't run sensitive workloads or keep really sensitive data on the public cloud. Do you want to put your really sensitive data, your research, algorithms, and programs on a public cloud service provider where they could look at it?
Or even internally, how much do you trust all of your sysadmins? A sysadmin with root could look at anything on any of those systems, even your internal systems.
Do you want your CEO's payroll data to be in there? What about legal data about companies you are acquiring?
All of these things are difficult, and they concern a lot of people in the enterprise, in government, and throughout the world.
We wanted to look at this, and it turns out there's a new set of technologies coming out right now called trusted execution environments. They are CPU and chipset technologies which allow you to run programs and applications in such a way that even the hypervisor, even root, even the kernel can't look into what you're doing.
That's great. Fantastic. Information is being published by AMD, by Intel, by IBM, with all this stuff coming out. But they're all different. They all handle the problem in a different way.
We, at Red Hat, started thinking about this and came up with some ideas. We decided that we wanted to make it easier for you to use these things.
Gordon: That sounds really interesting. We're going to go into a little bit more about what this means, about what we have to trust and what we don't need to trust any longer. Nathaniel, could you walk our listeners through, in a little more detail, how this whole thing works?
Nathaniel McCallum: One of the things that we are concerned about is that a lot of our existing technologies essentially require you to write your application to the technology. It should be no surprise to the listeners here that Red Hat is very much against lock-in.
We want it to be possible for you to write your applications using the standard APIs that you already use, in the languages you already use, with the frameworks that you already use, and to be able to deploy these applications inside any hardware technology possible.
This is the goal of the Enarx project. One of the things we realized early on was that there's a new technology called WebAssembly which is being used in browsers all around the world. Literally every single browser supports WebAssembly. It's very much being looked to as a sort of future for JavaScript.
The thing that's really interesting to us about WebAssembly is the set of capabilities that WebAssembly can deliver in conjunction with the WebAssembly system interface, WASI. It is almost exactly the same set of functions that you can actually use inside these hardware environments.
It also means that you get to write an application in your own language with your own tooling. You can compile it to WebAssembly. Then Enarx will aid you in securely delivering it all the way into a cloud provider and executing it remotely. The way that we do this is that we take your application as input, and we perform an attestation process with the remote hardware.
We validate that the remote hardware is, in fact, the hardware that it claims to be, using cryptographic techniques. The end result of that is not only an increased level of trust in the hardware we're speaking to; it's also a session key, which we can then use to deliver encrypted code and data into this environment that we have just cryptographically attested.
The end result is that you get to write your own application the way you want to write it. You get to deploy it with Enarx where you see fit, and you don't have to make your application depend upon specific hardware technologies.
Gordon: In the show notes, I'm going to link to some information about this project. Could you take us through, at a fairly high level, how this works?
Nathaniel: Basically, the way it works is that once we've completed our attestation, we have a session key that proves cryptographically that we are talking to our remote party, and we can communicate in a way that is encrypted using all of our standard cryptographic technologies.
We deliver the WebAssembly code that you have produced as part of your application directly to our secure execution environment on the remote host. At that point, it is just-in-time compiled for the actual CPU that you are going to run on. Then everything executes in that environment on the native processor.
We're also going to take care to enforce additional security measures. For example, if you persist any data (at some point you'll be able to read and write to a file system, though we don't currently implement this), the host will only see encrypted block devices.
The same thing is going to happen for networking. We're not going to allow unencrypted networking, but we will allow you to use TLS, for example, to communicate out. The end result is that you just get to write your application. Then, when you deploy it in this way, you have very strong assurances that a whole class of attacks against your application won't be able to get off the ground.
Mike: What we're doing, basically... Remember, I talked at the beginning about how you've got all of these layers you need to trust. We're removing the need for you to trust most of those layers, because the only things you need to trust are the chip vendor and the firmware they provided, which is all cryptographically signed, so you can check that.
Then there's the application you've written yourself, and of course the Enarx code, which is going to be open. It's one of the kind of weird things about security: in order for your work to be really confidential and closed to everyone else, you need to do your actual implementation, your coding, and your design in the open. It's generally accepted these days that open source provides for better security overall.
We at Red Hat, of course, want everything to be open source. Enarx is completely open and will always be open. We're using open-source technologies all over the place. We're using Rust as the main language. It's very well regarded for security, and for knowing what happens when things go wrong.
If
you have faults in in your application, you know what's going to happen. It's
not just going to start spilling digital as a place that could be used by a
malicious host to work out what you're doing, for instance. We're developing in
the open, so that you can do stuff in a closed way, with your sense of data,
which you should control data and algorithms.
Gordon: One of the things that we've seen over the last number of years, and you're both in security, Mike and Nathaniel, is this: I think so many people came to open source, or looked at open source, from the perspective of, "Oh, you would never publish the schematics for your alarm system if you're a bank."
No matter how good you think your security is, bring that analogy over into the open-source world and marry it with the cryptography world, and of course it doesn't really apply.
Mike: It's difficult, because people assume that would be a good analogy: that you've got the schematic of your bank vault, and a key. Cryptography is very different from that. You absolutely should never be using a cryptographic algorithm which is not known, open, and peer-reviewed. With a really good cryptographic algorithm, the only thing that needs to be secret is the keys you're using.
They say that any fool can create a cryptographic algorithm that they themselves can't break. I've certainly created cryptographic algorithms that I couldn't break, and other people showed me how I'd gone wrong. It's one of those things you learn to do as your apprenticeship in moving into security, so you understand how it goes wrong.
Cryptography is very much not like that: you need peer review, you need academics. Security is different, in some ways, from other parts of open source, in that there's this well-known dictum that with enough eyes, all bugs are shallow, which is a great dictum, of course.
But it's not quite that easy for security, because the number of people who have expertise in security is small, and you need to ensure that their eyes are being applied. It's not good enough to have lots of non-expert eyes looking at security. You need expert eyes looking at security.
That's one of the reasons that companies like Red Hat (and there are many others: Microsoft these days, Intel, IBM) are spending a lot of time getting their security experts looking at cryptography and open-source cryptography. It benefits the entire community and the whole ecosystem, what I call the commonwealth of what we are as open source.
Nathaniel: Just to add to what Mike was saying: this notion that with enough eyes, all bugs are shallow is predicated on the ratio of eyes to the amount of code.
When you are dealing with secure code, because there are a limited number of eyes, one of the things you want to ensure is that you limit the amount of code as much as possible.
This is why a project like Enarx is all about reducing the trusted computing base. We want there to be a lot less code that you have to trust, which means that we need fewer eyes to review it to make sure it's secure.
Gordon: We've certainly seen recently, at low levels of hardware and the like, that you get into these very complex pieces of engineering, and it gets harder and harder to predict or to figure out every possible security exploit.
Let's go maybe a little bit far afield as we wind this down.
Coming back to trust, what are some of the other areas in this low level of the software stack, in terms of trusted execution environments, in terms of firmware, in terms, perhaps, of CPUs themselves, where work is being done, or where you think there are possibilities to increase security?
Mike: Let me start with one, which is TPMs. TPMs have been around for quite a long time and people have not been using them, partly because within the open-source world there was a great concern, 10 or 15 years ago now, I guess in the early 2000s, that they were going to be used for DRM.
DRM has long been anathema to much of the open-source community, so TPMs never really got taken up within Linux and the open-source community. There's a new version, TPM 2.0, which is much improved, and people are beginning to realize there's great benefit in using them.
The thing about a TPM is that it's a hardware root of trust. It's really good if you need to build up levels of trust, because you can't do everything in Enarx yet. There are times you need to build up trust, and a TPM is a very good building block for those sorts of things. That's one example. Nathaniel, have you got some others?
Nathaniel: There's been a variety of technologies, even besides the TPM. Unfortunately, none of them have really gone very well. Most of them have been hard to use, they've been hard to enable on the system, and they've been driven by a lot of concerns, like DRM, that don't put the user first. This is why one of the key principles of the Enarx project is to make sure that we always put the user first.
Mike: For years now, we've understood about encrypting data at rest, when it's stored, and encrypting data in transport, when it's going over the network. We're now moving into a world where we need to encrypt data and algorithms in process.
That's what TEEs are for, and that's what Enarx aims to make easy for you to do as a developer.
Gordon: Thank you for listening to this episode
of Innovate @Open. For future episodes, subscribe to Innovate @Open on your
favorite podcast app.