Monday, December 20, 2021

RISC-V with CTO Mark Himelstein

RISC-V is an open instruction set architecture that's growing rapidly in popularity. (An estimated two billion RISC-V cores have shipped for profit to date.) In this podcast, I sat down with Mark Himelstein, the CTO of RISC-V International, to talk about all things RISC-V including its adoption, how it's different from past open hardware projects, how to think about extensibility and compatibility, and what comes next.

Listen to the podcast [MP3 - 22:54]


Gordon Haff: I'm very pleased to have with me today Mark Himelstein, the CTO of RISC-V International, fresh off a summit in San Francisco that I was pleased to be able to attend in person.

Welcome Mark. Maybe you could just introduce yourself and maybe give a brief overview of what RISC-V is.

Mark Himelstein: I'm Mark Himelstein. I'm the CTO. I've been in the industry for a bit. I was an early employee of MIPS, I ran Solaris for Sun. I've done a lot of high-tech stuff, and I've been with RISC-V for about a year and a half. Very excited. This was an incredible year for us, a very big change for us.

First of all, we believe that there have been well over 2 billion RISC-V cores deployed for profit this year, which is an important thing. Success begets success and adoption begets adoption.

A lot of people joined us early on and they're early adopters, and now, you're seeing people say, "Oh, they're successful now. I can be successful."

RISC-V is an instruction set architecture organization that sits roughly halfway between a standards body and an open source project like Linux. We don't do implementations. We're totally implementation-independent. We work with sister organizations that are nonprofits, like lowRISC, CHIPS Alliance, and the OpenHW Group, which do specific things in hardware with RISC-V.

We just really work on the ISA -- the instruction set architecture -- and we work on fostering the software ecosystem. All compilers, runtimes, operating systems, hypervisors, boot loaders, etc., all the things that are necessary for members to be successful with RISC-V.

It's a community of probably about 300 institutions and corporations. There's probably over 2,000 individual members, somewhere around 55 groups, doing technical work, about 300 active members in those groups and about 50 leaders.

They just did an incredible job this year ratifying 16 specifications. In 2020, we did one, so that's very big growth for us. A lot of things that had been hanging out there for some time, four to six years—things like Vector and Scalar Crypto, very innovative things, as well as some basic stuff like hypervisor and bit manipulation.

We finally got the standard out, so everybody's grateful for that.

Gordon: I want to talk about standards a little bit more in a moment. You mentioned this open ISA. What was the thinking behind taking this approach? Because obviously, there have been earlier, open hardware or semi-open hardware types of projects, which haven't necessarily had a big impact, or at least not as big an impact as maybe some people had hoped they would have at the time.

How is RISC-V different?

Mark: Yeah, it's a really good question. One of the problems when you hand something over whole cloth as open source is that it's hard for people to really feel ownership around it. The one thing that Linux did was everybody felt a pride of ownership. That was really hard to do.

We are the biggest open ISA that was actually born in open source, unlike the other ones. People are afraid that if one of the big corporations behind those goes away, then the open source will go away—the actual standard will go away. Rightfully so; we've seen that occur in the past.

RISC-V comes along, and it's different. Krste Asanović at Berkeley wanted to do some vector work, and Dave Patterson had done RISC I, II, III, and IV. They came up with RISC-V, where the V doubles as the Roman numeral five and as vector, and they started off doing this. All of a sudden, there's this groundswell of people who are interested in it. It got so exciting for folks that in 2015, they started plotting how to make it an open source organization, and they did in 2016. It's just taken off from there. People have been dying for this.

It's very clear. There's flexibility with respect to pricing—it's free. More importantly, there's also flexibility with respect to customization. You can do anything you want with it; nobody's standing over your shoulder.

We provide places for people to do custom opcodes and encodings and stuff like that. It's set up for extensibility. We believe that it will last for a long time because you can extend it over and over and over again, as we did this year: we added vector, we added these other things.
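
The encoding side of this extensibility is concrete: the base 32-bit instruction format carves out major opcode values (custom-0, custom-1) that the standard promises never to use for ratified extensions, so vendor instructions can't collide with the standard ISA. A minimal sketch of checking that field (instruction words here are made up for illustration):

```python
# Sketch: how the RISC-V 32-bit encoding leaves room for custom instructions.
# Bits [6:0] of an instruction word are the major opcode; the opcode map
# reserves "custom-0" (0b0001011) and "custom-1" (0b0101011) for
# vendor-defined extensions.

CUSTOM_OPCODES = {0b0001011: "custom-0", 0b0101011: "custom-1"}

def major_opcode(instr: int) -> int:
    """Extract bits [6:0] of a 32-bit instruction word."""
    return instr & 0x7F

def is_custom(instr: int) -> bool:
    """True if the word decodes into a reserved custom opcode space."""
    return major_opcode(instr) in CUSTOM_OPCODES

# 0x0000200B has major opcode 0b0001011, i.e. the custom-0 space.
print(is_custom(0x0000200B))  # True
# 0b0110011 is the standard OP opcode (ADD, SUB, ...), not a custom space.
print(is_custom(0x00000033))  # False
```

A decoder built this way can route custom-space words to vendor logic while every standard instruction keeps its ratified meaning.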

It's extensible. It's free. It's flexible to use any way you want to. We've also had a renaissance in EDA over the last 15 years.

It's a lot easier to pump out a bit of logic to go off and do, say, a security module using a RISC-V core, where it may have been harder to do that around the year 2000. That's gotten easier. This combination of things has been incredible.

You see adoption and you see deployment of products more in the IoT embedded space because the runway is shorter. It's not a general-purpose computer. You're running one application, you get it working.

Wearables and industrial controllers and disk drives, and accelerators that go into data center servers for AI and ML and graphics. All those things, you're seeing first. Then, the general-purpose computers come out a little bit later.

Accepting there are always exceptions, Alibaba announced at the summit last year that they have a RISC-V-based cloud server, and they have their next generation coming out.

You see RISC-V in every single part of computer science, from embedded to IoT to edge to desktop to data center to HPC. I even have a soldering iron that has a RISC-V processor.

Gordon: To this point about extensibility, there was a fair bit of discussion at the RISC-V Summit over, essentially, fragmentation versus diversity. This idea that you have all these extensions out there, but if people use them willy-nilly, then you're breaking compatibility.

I know there are some specific things like profiles and platforms that are intended to address that potential issue to some degree. Could you discuss this whole thing?

Mark: Yeah. I have a bumper sticker statement that says, "Innovate. Don't duplicate." That's the only thing that keeps us together as a community. Why do you want to go ahead and implement addition and subtraction for the thousandth time? Why do you want to implement the optimizers for addition and subtraction the thousandth time? You don't.

That's the reason why so many people are coming to the table as part of a community with a contributor culture like the one that was built by Linux.

Why are they showing up? Why are they doing work? They're doing it because they realize they don't want to do it all. It's too expensive to do it all. There are many countries or companies or whatever that were doing unique processors themselves because the licenses or the flexibility weren't available in other architectures.

They don't want to do their own stuff. It's the same reason why people didn't want to get hooked into Solaris or AIX. All those things that were going to Linux have gone to Linux.

It's the same reason why the coders in RISC-V don't want to be beholden to one company. They want the flexibility and the freedom to prosecute their business the best way that they see fit, and we allow them to do that.

Now, they want to share. How are we going to have them share? We have the same thing that shows up with something like Linux, in that we have to make sure that there are versions that work together.

We've done the same thing, in the same way that you have generational sets of instructions that work together, either by a version number or a new product name or a year.

We have the same thing with us, with profiles. RVA is the application profile; RVM is the microcontroller bare-metal profile. They'll both be coming out almost every year initially, and probably slower as time goes on.

RVA20 is the stuff that was ratified in 2019. RVA22 is the stuff that was ratified in 2021. It works for all applications. We can tell the distros, we can tell the upstream projects like the compilers, GCC and LLVM: this is what you go after.

Everybody knows, all the members know. If they're going to do something unique and different, they have to support that themselves. If they want to negotiate with the upstream projects, we don't get in the way, they can go ahead and do that.

The upstream projects know the profiles that are most important. The platforms are very similar, but for operating systems. We want people to be able to create a single distro, a single set of bits, that people download and configure and have work. Things like ABIs, things like discovery, things like ACPI—all those things are found in the platform.
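
One way to picture what a profile buys you: conformance boils down to a subset check over mandatory extensions, with custom extras allowed on top. A minimal sketch—the extension lists here are illustrative, not the ratified RVA22 contents:

```python
# Sketch: a profile as a set of mandatory extensions. An implementation
# conforms if it provides everything the profile mandates; extra (even
# custom) extensions don't break conformance.

RVA22_MANDATORY = {"I", "M", "A", "F", "D", "C", "Zicsr", "Zifencei"}  # illustrative subset

def supports_profile(implemented: set, mandatory: set) -> bool:
    """True if every mandatory extension is implemented."""
    return mandatory <= implemented

# A core with the mandatory set plus vendor extras still conforms.
core = {"I", "M", "A", "F", "D", "C", "Zicsr", "Zifencei", "Zba", "Zbb"}
print(supports_profile(core, RVA22_MANDATORY))          # True
# A minimal embedded core does not meet the application profile.
print(supports_profile({"I", "C"}, RVA22_MANDATORY))    # False
```

This is why a distro or a compiler can target "RVA22" rather than enumerating every implementation: the profile name stands for the guaranteed set.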

The same thing will happen there; it will come out on a yearly basis. There's, again, an application-layer platform, and there's a microcontroller one for real-time OSs and bare-bones things. As you might imagine, the bare-bones versions, both in the profiles and the platforms, are very sparse.

There's not much in there, because people don't want you to require a whole lot. To the point where: we had the M extension previously, and that M extension had multiply and divide. They don't want divide. It's too expensive in IoT, so we're breaking it down.

We're going to have a separate multiply extension that people can go ahead and use. Both of them are optional at the bottom end. We've provided a way that all the upstream things can go ahead and deal with it, all the distros can deal with it. Then, people can jump on board and use those things.

Ultimately, the goal is simple: be able to take an application that was compiled for one implementation and have it run on another implementation, and have them produce the same results within the bounds of things like timing.

Same thing with operating systems: one set of bits you'll be able to download onto multiple implementations, configure, and have work. That's how we're working on constricting fragmentation and giving you a tool to be able to do it. Again, the reason people want to avoid fragmenting is so that they can share.

Gordon: Dave Patterson made a comment in "Meet the Board" before the RISC-V Summit that, for a lot of uses—you alluded to this with IoT devices—the sort of microprocessor compatibility you've had with x86 is often not the right lens through which to look at RISC-V. It can be, of course, but it isn't, necessarily.

Mark: Even those guys want to share things. They're not going to want to do their compiler from scratch, but they're using the base 47 instructions instead of all the rest of the extensions. They don't care about those, because of exactly what you said.

Again, the thing that brings people together is common things that they have to do over and over again. I'll give you one very simple example. We're working on something called fast interrupts right now. What does it mean? It's shortening the pathway to get into an interrupt handler—not having to save and restore all the registers—for embedded.

That's what it's for. Very simple. All the embedded guys are in there, even though they're doing their own thing. They want to agree on one set of calling conventions and make it easy for them to do that.

That's not something that they're using for interoperability between their parts. That's something they're using, so they don't have to duplicate the work between the companies.

Gordon: Let me ask you a couple of related questions. The first of them is, where were the initial wins for RISC-V? A related question is, have there been wins with RISC-V that you didn't expect?

Mark: First of all, remember, we don't collect any reporting information. We don't require that somebody tell us how many cores, what they're used for, or anything like that. Anything we get is anecdotal.

The other thing is we don't announce for anybody. It's not our job to do that. We'll help amplify. We have a place on the RISC-V website where everybody can advertise for free, called the RISC-V Exchange. All that's wonderful.

The stuff we hear is when we have side meetings at conferences, like the summit and stuff like that. We know that most of the design wins and deployments that we know of are in the IoT embedded space, again, because of the runway. It's not a general-purpose computer.

One that's exciting, that people may not realize, is that a lot of the earbud manufacturers, especially out of China, are using RISC-V as their core. One is called Bluetrum—a member—shipping probably tens of millions of units per month with RISC-V cores. That's exciting to me.

I think, again, it's one of those things where it shows off the ability to take a RISC-V core, do something with it quickly, and get it out there. I have in my house 85 WiFi-connected devices—switches and outlets and doorbells and gates and garages and all that stuff. Ten percent of them are Espressif.

Espressif, again, is a member. They have gone ahead and produced RISC-V modules—home automation stuff. There are a lot of things showing up in a lot of places that we may not hear about right away.

We hear about them secondhand, and they are, A, a surprise, but B, exciting, and C, they engender success. When people see other people being successful doing this, they go and say, "Hey, I can do this, too." I think that that's amazing.

Again, you're going to see this continue up the chain. There are exceptions, like Alibaba doing their cloud server, but the servers are a little bit further out. The HPC guys are actively working in the European Processor Initiative and the Barcelona Supercomputing Center. All those guys are working on stuff. We know that the United States government in various places is working on things.

The gentleman who runs our technology sector committee, John Liddell from Tactical Labs in Texas, works with various government organizations and has simple things like Jenkins rigs to do testing for RISC-V and stuff like that.

There's a lot of work that goes in various areas, but I don't think there's a single part of computer science that isn't looking at RISC-V for something or another, whether it be a specialized processor to help them do security or processing for ETL, or something like that, or something that's a general-purpose thing. It's everywhere.

You're going to see more and more products come out over time. We're not the only ones who are taking a look at how much is coming out. All the analysts have put out numbers, and they're predicting in the range of 50 billion to 150 billion cores out there in a very short period of time. It's going to grow as people see that it's an easy thing to do.

Gordon: What is your role at RISC-V? What do you see your primary mission as being?

Mark: I like to make things simple. The most important thing for me is the proliferation of RISC-V cores for profit. That has to be the thing that stays in your mind. In the short term, my goal is to get people over the goal line with the pieces they need to get over the goal line with.

In 2020, we produced one spec; in 2021, we did 16. That's through the effort of me and everybody else in the team to prioritize, put governance in place, get groups help where they needed it, and push things over the goal line. Get those specs out there that the members care about in order to make their customers successful.

Then, finally, the ecosystem. Look, without compilers, without optimizers, without libraries, without hypervisors, without operating systems, it just doesn't matter. It doesn't matter how good your ISA is. Having all those pieces there is really important.

I'm a software guy, and they hired a software guy to do this job because of that. I've worked on ISAs, but I understand software everywhere from boot loaders up to applications.

I've worked on all those pieces. It's really critical, and you're going to see us put even more emphasis on that. That's been the greatest growth area in our groups over the last year, and you're going to see continued effort by the community.

Gordon: I think you've maybe just kind of answered this, but if you look out in a year, two years, what does success look like or conversely, what would you consider to be flashing alarm lights or bells going off?

Mark: One of the things that we haven't done up until now is really put a concerted effort into industries. A lot of it has been really bottom-up: "Hey, we need an adder, right? We need multiply. We need vector, right?" Those are things where we go, "Hey, other architectures have this."

Now, we're really starting to take a look from the board, to the steering committee, down through the groups at things like automotive, at things like data center, at finance, at oil and gas, at industries and trying to take a look holistically at what they need to succeed.

Some of it's going to be ISA. Some of it's going to be partnering with some of these other entities out there. Some of it's going to be software ecosystem. The goal is to not peanut-butter-spread our efforts to the point where nobody can be successful in any industry, right?

It's important we say, "OK, you're doing automotive." All of a sudden, you have to look at ASIL and all these ISO standards, functional safety, blah, blah, blah, and we have to make sure that stuff occurs. We have a functional safety SIG by the way.

Success, to me, looks like continued deployment of cores that are sold for profit, and then starting to attack some of these industries holistically that need these pieces and make sure that all the pieces they need inside of RISC-V are there and working and completed.

Gordon: Well, thank you very much. Is there anything else you'd like to add?

Mark: Well, again, I think the biggest thing is just a big thank you to you and the rest of the community for being inquisitive, participating, joining the contributor culture, and helping make RISC-V a success. We're always looking for people to help us and join us, so come find us, and if you have any questions, send us mail. Thank you very much.

Gordon: Other than just going to the RISC-V website, are there any particular resources that somebody listening to this podcast might want to consider looking at?

Mark: If they're very technical, on the website there's a tech tab. Underneath there, there's a tech wiki. That has pointers to GitHub with all the specs, to the upstream projects—GCC, LLVM—to our governance, all those things. It gives you a really good jumping-off point. There's a getting-started guide there as well for tech guys.

In general, if you're not a member, become a member. It's really easy. If you're an individual, you can become a member for free. If you're a small corporation just starting out, we have some breaks. There are different levels of membership—strategic, premier TSC, premier. Come join us. Help us change the world. This is really different.

I had no clue what this was when I joined it. I'm very grateful, and I'm very happy to see it really is making a very big difference in the world.

Wednesday, December 01, 2021

The Open Source Security Foundation with Brian Behlendorf

The Open Source Security Foundation (OpenSSF) is a fairly new organization under the Linux Foundation focusing on open source software security with an initial primary focus on software supply chain security. Brian Behlendorf recently moved over from the Hyperledger Foundation which he headed up to take over as General Manager of OpenSSF.

I was able to grab a few minutes with Brian at the Linux Foundation Member Summit at the beginning of November. We talked about the genesis of OpenSSF, his initial priorities, how to influence behaviors around security, and what sorts of "carrots" might help developers to develop more secure software.

Listen to the podcast [MP3 - 16:26]

[Transcript in process]

Gordon Haff:  Hi, everyone. This is Gordon Haff, technology evangelist at Red Hat. I'm pleased to be here at the Linux Foundation Member Summit in very nice Napa, California, with Brian Behlendorf, who's the newly minted general manager of the Open Source Security Foundation.

Brian, what is the Open Source Security Foundation? What was the impetus behind creating this?

Brian:  Over a year ago, actually, just before the pandemic started [laughs], a small group of companies got together, simultaneously at two different firms—at GitHub and at Google—with each starting to really think about this problem of application security and dependencies and what happens during build time and distribution.

All these blind spots that we have in the open source community around how code is built and deployed and makes its way to end users. It's funny how both started simultaneously. Then, people realized it'd probably be better to combine forces on something like this. There wasn't any budget; there wasn't really any clear ownership.

The Linux Foundation stepped in, partly at the behest of these companies, and came to be a home for the informal collaboration around ideas around what really could be done here. 

Then, that group came up with: Let's focus on developer identity and signatures on releases and things. That became a working group.

Let's look at best practices and education materials: That became the best practices working group.

Six different working groups were formed and a bunch of projects underneath.

Then, some momentum started to build, and a recognition that there might be some systematic ways to address these gaps across that entire lifecycle: code coming out of a developer's head going into an IDE, the developer choosing the dependencies to build upon, and then all the way to distribution.

There are all these points of intervention, places where there could be improvements made. That became the Open Source Security Foundation. Then, after about a year of this mapping the landscape and figuring out what to do, it was clear that there were some places where some funding could be applied.

In the typical Linux Foundation fashion, we said, "Well, let's see who's interested in solving this problem together." There are a bunch of existing organizations, about 60 or so. Most of those and a whole bunch of new ones came together and agreed to pool some funds.

Which ended up being over $10 million to go and tackle this space. Not with the specific idea of, "We're going to build this product," or "We're going to solve this one thing," but a more general purpose like, "Let's uplift the whole of the open source ecosystem."

"Raising the bar" is one way to think of it, but "raising the floor" is a phrase that I prefer.

As momentum built around a properly funded entity—I had been concerned about this space for a long time while leading the Hyperledger initiative as executive director, as well as Linux Foundation Public Health—I said, "I'm happy to help on this, and I probably should," so I jumped over to lead this one.

We launched that in mid-October, announced the funding, and I have our first governing board meeting tomorrow, Friday.

Gordon:  Good luck with that. Not to make this too inside baseball, but the Linux Foundation had its Core Infrastructure Initiative that was kicked off, I guess, by Heartbleed and some of those problems, and it seems the focus has shifted a bit.

Brian:  The Heartbleed bug was specifically in OpenSSL. Jim Zemlin, my boss, went around and passed the hat and did raise a healthy chunk of funds to try to expand it beyond the two developers named Steve, working in their spare time or in their consulting time, to try to be a larger community. I think that had some success. We've had other initiatives like the CII Badging effort, which is now being rolled into OpenSSF, and lots of focus on security in the Linux kernel, so there have been these different security initiatives.

Oh, and a really big one has been SPDX, which started life initially focused on licensing and making sure that this big tarball of code I have and all the dependencies are appropriately licensed and appropriately open source, "I'm following all the rules," and that kind of thing.

Now people are realizing, "Oh, it's easy to extend this to a proper SBOM type of scenario." I can understand, if I've got these versions of code, which ones are vulnerable. That really helps with the auditing and understanding, not just in this tarball but across my entire enterprise: "Where might I be vulnerable to outstanding CVEs that have been fixed by updates?" and that kind of thing.
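
The value of a machine-readable SBOM can be sketched in a few lines: once the component inventory is data rather than tribal knowledge, auditing for known-vulnerable versions becomes a lookup instead of a manual hunt. All package names, versions, and CVE identifiers below are made up for illustration:

```python
# Sketch: auditing an SBOM (a list of (package, version) components)
# against a toy advisory database. Real tooling would parse SPDX documents
# and query live vulnerability feeds; the principle is the same.

ADVISORIES = {
    ("libwidget", "1.2.0"): "CVE-2021-0001 (fixed in 1.2.1)",
    ("parseit", "0.9"): "CVE-2021-0002 (fixed in 1.0)",
}

def audit(sbom):
    """Return an advisory line for every vulnerable component in the SBOM."""
    return [f"{name} {ver}: {ADVISORIES[(name, ver)]}"
            for name, ver in sbom if (name, ver) in ADVISORIES]

sbom = [("libwidget", "1.2.0"),   # vulnerable version -> flagged
        ("parseit", "1.0"),       # already updated -> clean
        ("libcrypto-x", "3.1")]   # no known advisory -> clean
for finding in audit(sbom):
    print(finding)  # libwidget 1.2.0: CVE-2021-0001 (fixed in 1.2.1)
```

Run across an enterprise's aggregated SBOMs, the same lookup answers "where am I exposed to CVEs that already have fixes?"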

The SPDX effort—now an ISO standard—is something that isn't rolled under this, but it's another, complementary Linux Foundation effort. With this, I think we're trying to be more systematic about what the tooling is, what the specifications are, what the standards are.

Also, what's some training we can do? What are ways to help the individual open source projects, even outside the Linux Foundation, have better processes and be better supported in prioritizing security.

Gordon:  A lot of what I've heard about security at this event has been around supply chain security. Obviously, security covers a lot of stuff. It at least appears that your real focus initially is on this supply chain security.

Brian:  It's funny. I wanted to call it software NFTs, but I got shut down on that. Somebody reminded me earlier this week that we used to call it SCM, software configuration management. In fact, source code management tools—like GitHub, or Git and Subversion—have long been about having that picture of where software came from.

The metaphor of the supply chain works because, not only are supply chains hot because of the ships sitting off the Port of Long Beach, there's also this recognition that software does have a journey—that we are building on top of open source components so much more than previously.

I remember 25 years ago, when I was first getting involved. You think about what the dependencies were in Apache httpd: it was glibc. Whatever the operating system provided; it was pretty minimal.

These days, open source packages will have thousands of dependencies, partly because developers who push to npm and PyPI and places like that often do tiny packages of around 10 lines of code. You aggregate all these together, and it ends up being much harder to audit, much harder to know if you're using updated versions.

The framing of a supply chain seemed to better crystallize the fact that there is a whole lot of different parties that touch this stuff. It also helps characterize that this is an issue that industry cares about, that is global in nature, and that governments are starting to care about now too.

I'd say one of the big galvanizers for getting this to be a funded initiative was the White House executive order back in May, calling for the technology industry—not just open source, but the technology industry—to get better about supply chain security, to address the kinds of vulnerabilities exploited in the SolarWinds hack and other famous breaches of the last few years.

Gordon:  I'd like your reaction to how well understood this problem is. I've seen numbers that were all over the place. The Linux Foundation has some numbers indicating that, with the executive order, maybe things weren't so bad from an awareness point of view. However, Red Hat has run its Global Tech Outlook survey for a few years. When we asked about funding priorities for security, third-party supply chain was basically the bottom of the barrel at 10 percent.

What's your reaction? What are we seeing here?

Brian:  Security is so hard to price. You ask somebody, "Do you want to use secure software?" Nobody says no. But what objective metric do we have to know it's secure enough? Other than, "Have you been hacked recently?" "Are you aware that you've been hacked recently?"

If your answer is, "I've not been hacked," your answer is probably that you're not aware you've been hacked. We really are lousy at coming up with objective ways to say we've hit a score when it comes to the security of the software, or the risk around a breach, that kind of thing. By contrast, we do know for sure when a feature we lack isn't generating revenue for us.

In product roadmaps, whether we're talking about commercial software or even open source software, feature work tends to win out over paying off technical debt, which tends to win out over updating dependencies, which tends to...

It's a shame: even though people say they prioritize this, it's hard to do. One of the things I've been thinking about as I've dived into this is how we get security to be not like a checklist, not a burdensome bureaucratic kind of thing that developers feel they have to follow, but instead to have a set of carrots that would incentivize devs to add that extra information, to update their dependencies more often, to make it easier for their own end users to update.

There are some software packages that make updates smooth in that respect—legacy APIs that don't change things a lot. There are others where every minor point release ends up being a rather disruptive update.

One of the things that might be imagined is if cloud providers...Well, first off, through the work that communities are doing on SLSA and Sigstore and some of the other specifications work.

We'll get to the point where you'll be able to start to generate a relative, perhaps even in some sense absolute, metric of the integrity, the trustworthiness, and the risk profile of one tarball versus another, or one collection of deployed software versus another, one system image versus another.

I think cloud providers might be in a position to start to say, "Hey, we will charge less money to run this image if it has a lower risk associated with it, if there's better attestation around the software, if it's less a collection of one offs done by people in their spare time and more something that's been vetted, something that's been reviewed.

"Something that is pretty well established with minor improvements or something versus this other thing." If we can get incentives by the cloud host to charge less for that, a far off future might even be insurance companies. How do we manage risk out there in the real world?

It tends to be by buying insurance to cover our costs if our car dies on us or we have a health scare or something like that. The pricing of premiums in insurance markets is one way to influence certain behaviors. It's one reason people stopped smoking because their health insurance premiums went up if they kept smoking.

Is there a way to make [laughs] tolerating vulnerabilities in software like smoking where you can do it, but it's going to get expensive for you? Instead, if you just updated that underlying version, your cost would come down. Maybe this is a path to getting that to matter more in people's roadmaps.

Gordon:  I guess another way to ask the same thing: is this a matter of people need to do this, but it's going to be expensive—they're going to have to spend a lot of money to do it? Or is it that they need to do it, but it doesn't necessarily need to be that onerous?

Brian:  Signing your releases; having a picture of when the dependencies you depend upon are vulnerable and it might be worth updating them. There's a whole batch of activities we can do to make the development tools, and the way stuff gets deployed, embody these specifications and these principles out of the gate, so that the right thing automatically happens.
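
The most basic building block of that picture is artifact integrity. A minimal sketch of verifying a downloaded release against a published SHA-256 digest—real supply chain tooling such as Sigstore layers signatures and signer identity on top of exactly this kind of check:

```python
# Sketch: checking that a release artifact matches the digest the project
# published alongside it. This catches tampering or corruption in transit,
# though not a compromised publisher (that's what signing addresses).
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest of the artifact bytes."""
    return hashlib.sha256(data).hexdigest()

def verify(artifact: bytes, published_digest: str) -> bool:
    """True if the artifact hashes to the published digest."""
    return sha256_hex(artifact) == published_digest

release = b"pretend this is a release tarball"
digest = sha256_hex(release)                   # what the project publishes
print(verify(release, digest))                 # True
print(verify(release + b" tampered", digest))  # False
```

Making steps like this automatic in the build and deploy tools, rather than an optional chore, is the "right thing automatically happens" goal Brian describes.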

There are improvements we could make in the standard software dev tools out there, maybe even in places like GitHub, GitLab, and that kind of thing, to make the cost of adopting these things really low for developers. Make it the default. Make it the norm in the same way that accessing a TLS website today is the norm. It's almost unusual to go to one without a TLS certificate.

You'll get warned away now in current versions of Chrome. We've got to do that at the same time as we create incentives to do the things where there's unavoidably a cost. When you update an underlying dependency, it's almost never zero cost. What's a reason to do that? There have to be a series of carrots, as you call it, and hopefully very few sticks.

Finally, though, given the government interest in this domain, you will start to see executive orders like we saw even today, I believe it was, or yesterday. There was a White House executive order telling all the federal agencies they've got to update the firmware on routers and deal with a specific set of outstanding known vulnerabilities or shut their systems down.

That's ultimately what you have to do when you're running old code that's unsafe. We might also start to see regulated industries like finance or insurance. The regulators might start to say, "Hey, if you're running code that hasn't been updated in five years, you are a clear and present danger to everyone else in the ecosystem. Shape up or ship out."

It'll be interesting to see if this starts to be embedded in the systems of the world that way.

Gordon:  Supply chain is your initial focus here. If you're looking a little further out, what are some of the other problems? What might be the next two or three problem areas you all attack?

Brian:  I've heard tons of stories about this recently. I think it's a pretty well-accepted trope, though I don't have metrics on this: application security and secure software development are not really taught in computer science courses, whether we're talking about traditional schools, the code academies, code camps, or even other spaces. We don't teach a lot of this stuff.

Of course, we're already working on this inside of OpenSSF, but how do we get that into the standard CS curriculum, all the code academies, and those kinds of things? It's an important thing to do, and we need to figure out how to reward and recognize people for accomplishing that.

Gordon:  Anything else you'd like to share with our listeners?

Brian:  The Open Source Security Foundation is still pretty young. There's still lots of different touchpoints, lots of different things that we're either working on or thinking about working on. We now have some resources to go and apply to different domains.

If you are interested in this domain, if you've got a project you think is worthy of bringing to OpenSSF, or if you care about this for your own open source project, please come engage in the working groups. Working groups are the primary unit of work [laughs] in our community.

We'd love to have you, no matter what level of expertise you're at. We really want to help the entirety of the open source ecosystem uplevel in this space. All are welcome.

Thursday, August 05, 2021

Integration testing and Testcontainers with Richard North


Richard North is the creator of the popular open source integration testing library, Testcontainers, and former chief engineer at Deloitte Digital. I caught up with Richard shortly after the company he co-founded and of which he is CTO, AtomicJar, emerged from stealth with a $4M seed funding round led by boldstart ventures.

Although AtomicJar has not yet announced a product, Richard said on the podcast that it will be a SaaS product that extends and complements Testcontainers. Testcontainers is a Java library that supports JUnit tests, providing lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in an OCI/Docker container.

In this podcast, we discuss some of the challenges associated with integration testing and how Testcontainers came into being as an open source project to address some of the key pain points.

Listen to the podcast [MP3 - 10:34]

Tuesday, June 22, 2021

Using open source to help the community and drive engagement at Mux

In this podcast, Mux co-founders Steve Heffernan and Matt McClure talk open source strategy and making contributors feel rewarded. They also delve into why video is so hard and how a community working on it helps to solve the hard problems.

  • Demuxed 2021 conference for video engineers

Listen to the podcast [23:11 - MP3]

Wednesday, June 02, 2021

OpenSLO with Ian Bartholomew

In this podcast, I speak with Ian Bartholomew of Nobl9 about the release of OpenSLO as an open source project under the Apache License 2.0. The company describes OpenSLO as the industry's first standard SLO (service level objective) specification.

In this podcast, we discuss:

  • What site reliability engineering is and how it relates to more traditional sysadmins
  • Trends in observability
  • Why the company decided to make the spec open source
  • How the project thinks about success

Access OpenSLO on GitHub

Listen to the podcast - MP3 [14:25]

Friday, May 14, 2021

A far ranging AI discussion with Irving Wladawsky-Berger

I've known Irving since his days at IBM running Linux strategy. Since he "retired," he's been busy with many things, including a number of roles at MIT, where I've kept in touch with him, including through the MIT Sloan CIO Symposium. We were emailing back and forth last week and discovered that we've been on something of a similar wavelength with respect to AI. Irving just wrote a blog post, "Will AI Ever Be Smarter Than a Baby?", which delved into some of the same topics and concerns that I covered in a presentation earlier this year.

In this discussion, we explored the question of the nature of intelligence, the answer to which seems to go well beyond what is covered by deep learning (which, to put it way too simplistically, is in some respects a 1980s technique enabled by modern hardware).

Among the topics that we explore in this podcast:

  • The two notions of intelligence: classifying/recognizing/predicting from data, and explaining/understanding/modeling the world, which is complementary but potentially much more powerful.
  • Whether we need to bring in a stronger element of human cognition (or really even the learning and problem solving we see in the animal kingdom) to take the next steps, and the related work in cognitive science by researchers like Alison Gopnik at Berkeley and Josh Tenenbaum at MIT.
  • Have we been seduced by great but bounded progress? Can we get to Level 5 autonomous driving?
  • What will the next 10 years look like?

Listen to the podcast - MP3 [40:29]

Thursday, April 01, 2021

Metrics with Martin Mao of Chronosphere

Martin Mao is co-founder and CEO of Chronosphere, a company that offers a hosted SaaS monitoring service. 

In this podcast, he discusses:

  • The observability landscape
  • The rise of Prometheus
  • The role of open source
  • What happens when instrumentation is built into everything cloud-native in a standardized way

In an earlier podcast, I spoke with Martin about the challenges of open sourcing an internal company project.

Listen to the podcast [MP3 - 21:19]

Monday, March 22, 2021

Render, PaaS, and open source with Anurag Goel

Anurag Goel, an early Stripe employee, co-founded Render which puts a PaaS layer on top of Kubernetes. In this interview, we talk about: 

  • How things are different for PaaS than in its v1 days
  • How he thinks about what being "opinionated" means in the context of PaaSs
  • The importance of user experience
  • The benefits for everyone of open sourcing software components

Listen to the MP3 [25:56]