Tuesday, August 07, 2018

What was new at Serverlessconf?

Cue the obligatory “There are still servers in serverless” and “There is no cloud; it’s just someone else’s computer.” Like many, I’m not a fan of the term, but it’s seemingly here to stay. I’ll get over it, just like I did with private cloud. In any case, the more interesting question is what’s going on with serverless—which is what took me to Serverlessconf in cool, gray San Francisco last week.

This is the point where it makes sense to introduce serverless for anyone who may have heard of the term but hasn’t studied it closely. And that turns out to also be a good segue for discussing the conference as a whole. As recently as last December, I was on a panel and a member of the audience asked us about the difference between serverless and Function-as-a-Service. We were able to offer the beginnings of an answer at the time—not that everyone was quite singing from the same hymnal—but my sense is that most everyone is in alignment now, with a caveat that I’ll get to presently.

So what are serverless and FaaS? For that I’ll turn to my Red Hat colleague William Markito Oliveira, who wrote in a recent blog post:

Functions-as-a-Service (FaaS) is an event-driven computing execution model that runs in stateless containers and those functions manage server-side logic and state through the use of services. Serverless is the architectural pattern that describes applications that combine FaaS and those hosted (managed) services. MartinFowler.com has a great article that provides more details and the origin of the terms.

In other words, FaaS is one component of serverless. But an application written using serverless patterns will also generally use a variety of standard building blocks to provide common services across many applications. Databases, authentication, and proxies are examples of services that many different applications require. These will be managed by an operations team; from a developer’s perspective, it doesn’t matter if that team works for the same company and runs them on-premise or if the team is employed elsewhere, such as a public cloud provider.
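To make the division of labor concrete, here's a minimal sketch in the style of an AWS Lambda Python handler; the table name and event fields are hypothetical, not from the post. The function itself is stateless and event-driven, and it hands persistence off to a managed database service:

    import json
    import boto3  # AWS SDK for Python; DynamoDB here stands in for "a managed service"

    # Created outside the handler so the connection is reused across invocations
    table = boto3.resource("dynamodb").Table("orders")  # hypothetical table name

    def handler(event, context):
        """Stateless, event-driven function: parse the event, persist via a managed service."""
        order = json.loads(event["body"])  # assumes an API Gateway-style event
        table.put_item(Item={"order_id": order["id"], "status": "received"})
        return {"statusCode": 200, "body": json.dumps({"accepted": order["id"]})}

The business logic is the only thing the developer owns here; the runtime, the scaling, and the database are somebody else's operations problem.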

This brings us to my caveat. While most everyone agrees on what makes for a serverless architectural pattern, there’s far less unanimity on the degree to which this pattern mostly applies to public clouds only, fits with various hybrid application development and deployment models, and over what timeframe the assumed shift to increased serverless usage takes place.

Serverlessconf itself leans fairly heavily toward the public cloud angle. But that really shouldn’t be surprising. The organizers of the show are a training organization focused on public clouds. Of course, they’re not the only ones to see serverless as part and parcel of the rich set of services that complement FaaS on public clouds. The finer pricing granularity of many of these cloud services (including FaaS), relative to per-hour or even per-minute billing for Amazon Web Services EC2 instances, for example, is also seen as a feature that doesn’t really translate to an on-premise environment. (That said, there was far more emphasis at this conference on developer productivity advantages than on pricing models; I want to say this represents at least a shift of degree from this conference in New York last year.)

At the same time, a number of talks expressed a pragmatic recognition that serverless and FaaS aren’t for everyone, at least today. Amiram Shachar discussed “Shipping Containers as Functions.” Yochay Kiriaty pointed out “Mistakes and Anti-Patterns in Serverless (or when NOT to use Serverless).” Kiriaty noted that serverless best fits with a specific async and event-driven programming model. Other characteristics that are mostly needed to benefit from serverless include: stateless logic, idempotence of functions, one task per function, and functions that finish quickly and avoid recursion.
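Idempotence is the item on that list that trips people up most, so here's a minimal sketch of the idea; the function name and the in-memory store are illustrative only, and a real function would use an external managed store, since FaaS instances can't hold durable state:

    import hashlib
    import json

    processed = {}  # stand-in for an external key-value store; FaaS instances are ephemeral

    def charge_customer(event, context):
        """One task per function, and safe to retry: the same event is only applied once."""
        # Derive a deterministic key from the event so a redelivered event becomes a no-op
        key = hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()
        if key in processed:
            return {"status": "duplicate", "id": key}
        processed[key] = True  # in practice, a conditional write to the external store
        # ... do the single, short-lived piece of work here ...
        return {"status": "charged", "id": key}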

More generally, Erica Windisch argued that 12-factor provided guidelines, but serverless enforces them. 12-factor is most associated with early hosted platform-as-a-service, so this may give you some sense of the types of applications that are the primary serverless targets today.

A tweet from Andrew Clay Shafer that I assume was inspired by this or another talk stated: “Serverless is a particularly opinionated PaaS.” This is worth pondering, if only because I think there are a lot of hard and fast lines being drawn around what are essentially architectural patterns: serverless, containers, PaaS, VMs, IaaS, CaaS (containers-as-a-service) whereas it’s more of a continuum with adjacencies that blend into each other and combine features. A topic for another day.

Wednesday, July 25, 2018

Google Next: Enterprise, ML/AI, open source, and hybrid

Things I learned at Google Next (or at least I think I did). Think of these as preliminary observations based on sessions I watched, people I spoke with, or tweets I read/replied to.

One thing about the show reminded me of an AWS re:Invent from four or so years ago. Earlier re:Invents mostly trotted the usual suspects up on stage: Netflix, SmugMug, other startups that probably are no longer with us. Suddenly, we had NASDAQ and other well-known large enterprise customers who demanded mission-critical reliability from their infrastructure. This year's Google Next saw a similar transformation. There were young companies doing cool stuff; Indonesia's Go-Jek made a particular splash. But there were also plenty of speakers from companies like Nielsen and Target (which is leaving AWS in favor of Google). [ADDED: At least that was the image projected from the main tent stage. As a colleague correctly noted to me, the show floor told a much more startup-centric story relative to where AWS and Microsoft Azure are today.]

I'm not sure I heard any direct, or even oblique, references to competitors, but I think it was pretty clear where Google thinks its differentiation lies. The most prominent area was ML and AI, which were omnipresent. There were some explicit announcements of ML services, mostly aimed at making ML more accessible to the millions of developers who are not data scientists. But elements of AI/ML were pervasive, whether as part of Google Maps or G Suite. The second area was open source. Open source as a central strategy was strongly reflected in the small Community day I was invited to (and gave a lightning talk at) on Monday, but it was also front-and-center in the opening day keynotes.

Google announced that Cloud Functions was GA and took the covers off Knative. (Knative essentially helps create a common building block for serverless on top of Kubernetes across hybrid clouds. My colleague William Markito Oliveira has a nice piece up that discusses FaaS, serverless, and Knative in more detail.) Google sort of soft-pedaled serverless (which I'll use as the general term even if I don't like it) though. They announced Knative on Day 1 in a press release. But serverless only got a short segment in the Day 2 keynote. I'm not sure what to make of it. One theory I heard, which I sort of like, is that given AWS' FaaS time-to-market and mindshare lead with Lambda, Google is taking advantage of Kubernetes' container mindshare to enter the market through that door rather than take on Lambda directly.
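For a sense of the Cloud Functions developer experience, this is roughly the shape of an HTTP-triggered function in the Python runtime Google was introducing around this time; treat it as a sketch under those assumptions rather than a definitive reference:

    # main.py for a hypothetical Cloud Functions deployment
    def hello(request):
        """HTTP-triggered function; `request` is a Flask request object in this runtime."""
        name = request.args.get("name", "world")
        return f"Hello, {name}!"

    # Deployed with something like:
    #   gcloud functions deploy hello --runtime python37 --trigger-http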

To expand on the previous point slightly, here's a quote from the Day 2 keynote: "Containers are the universal platform for cloud." I'm pretty sure neither Microsoft nor AWS would make that statement. And I think I'm not over-analyzing to say that Google views serverless as part-and-parcel of a broader container infrastructure and cloud-native app dev environment as opposed to a discrete technology. For what it's worth, I agree with this view. I think there's too much drawing of hard lines between these different approaches to writing and running services going on--but that's a topic for another day.

Finally, the hybrid thing. Google announced an on-prem version of its Google Kubernetes Engine (GKE). First of all, I think I should be getting royalties for some of their messaging; it sounds a lot like various things I've written over the years. But I digress. It's a good story and one with which I obviously agree. There's clearly an appetite for being able to run workloads portably across different environments. But I'd just observe that this is very new territory for Google. Enterprise customers bring a lot of quirks, integration needs, and customization requests to their in-house infrastructure. Heck, if they are happy with a fully standardized offering, they probably should be looking at just using a public cloud. So, strategically this makes sense. But it's not really in Google's wheelhouse and they may find this sort of offering less amenable to the sort of technical solution they're accustomed to creating.

More to come but these are some observations after the first couple of days.

Wednesday, July 11, 2018

Links for 07-11-2018

Thursday, June 21, 2018

Podcast: Patrick Maddox of Twistlock on container security

Twistlock does lifecycle vulnerability and compliance management for containers. In this podcast, we talk about balancing the needs of developing fast and operating securely, including how to use tooling as a common framework for driving discussions between those writing apps and those responsible for operating them.


Listen to MP3 [13:09]

Listen to OGG [13:09]

Wednesday, March 14, 2018

Podcast: Arpit Joshipura on open source networking

Arpit Joshipura, general manager of networking at The Linux Foundation, takes us through the evolution of networking from its proprietary beginnings. He describes the networking layers that make up the full stack and explains how technical capabilities like disaggregation, the broader software-defined phenomenon, and virtual functions have led to the big changes we see today in the telco world and, increasingly, in networking more broadly.

Listen to podcast MP3 [00:12:25]

Listen to podcast OGG [00:12:25]


Tuesday, March 13, 2018

Huawei Chief Strategy Officer Bryan Che talks the China market and open source


I worked for Bryan Che for my first seven years at Red Hat. We caught up with each other at the Open Source Leadership Summit in Sonoma. Bryan thought it would be interesting to share his observations and experiences on open source adoption in China, the China cloud market and cloud portability, and what it's like living in China. One of the interesting dynamics we discuss is that China is starting afresh in areas such as public clouds where, in the US, patterns were well-established before the current generation of open source software became available. 

Listen to podcast MP3 [00:22:26]

Listen to podcast OGG [00:22:26]

[Transcript]

Gordon Haff:  Today I am joined at the Open Source Leadership Summit in Sonoma by Bryan Che. Bryan Che is actually the person who hired me on at Red Hat, about eight years ago.

He is now working at Huawei in China. He's going to give us some insights about the China market, open source in China, his personal experiences in China, and things in transformation. Welcome, Bryan.

Bryan Che:  Thanks, Gordon. It's great to see you again here, and a lot of old friends from Red Hat, as well as many new friends from the area.

Gordon:  Maybe you can start by introducing yourself, and tell us about your journey.

Bryan:  I'm currently the Chief Strategy Officer at Huawei. I'm working out in their headquarters in Shenzhen, China, which borders Hong Kong in southern China. I was at Red Hat for 15 years, which I joined straight out of MIT, where I did my bachelor's and master's in computer science. I spent the last 20-odd years in Boston and never figured I'd move overseas to work.

This was a really interesting opportunity to be able to work in open source software, but also in everything from consumer electronics, to telecommunications and hardware, and also see a lot of the things that were happening on the other side of the world. I'd read about it. I'd visited a few times ‑‑ on business trips and vacations ‑‑ but it's another thing to actually be in the midst of it, day by day.

Gordon:  We were talking the other night about the China market, and how it's so very different from ‑‑ it's probably fair to say ‑‑ every place else in the world. I thought it'd be interesting, now that you've had some time, some perspective...With your work, you've obviously spent a lot of time looking at it in depth.

What is the few‑minute‑or‑so summary of how you see the Chinese market around telecoms, around computing, around consumer apps...all of that kind of stuff?

Bryan:  I think one of the fascinating things for me that I've observed is that the technology space is really booming. A lot of the really high-level themes are similar to what we've seen in the US, in Europe, and elsewhere. There's a big focus on digital transformation, on machine learning and AI, and on IoT.

The way these technologies are evolving and being deployed is very, very different from what I've seen elsewhere. Just as one example in the public cloud space, we've had AWS and a lot of these other public clouds like Google and Microsoft and IBM and so on starting 10, 11 years ago in the US. It's a more recent market in China.

One of the things that's been really interesting for me as I've been working with our cloud business in terms of their public cloud strategy and what we do around open source is that if you had the opportunity to do public cloud again, now with 10 years hindsight and now with all these new open source technologies and architectures and microservices and Kubernetes and OpenStack, what would you do differently?

Being able to have the chance to do that in China, where adoption is growing very, very quickly from a public cloud standpoint but is still relatively new compared to the adoption in the US and other parts, provides a really fun thought experiment as well as a way to be able to try new things.

On the other hand, we're also seeing because China doesn't have as much incumbent technology, in many cases, it's leapfrogging a lot of the things that I've experienced here in the US.

Just as one example, at Huawei, we've been working with some of the bike sharing companies like Ofo. If you come around Shenzhen or Shanghai or Beijing or any of the major cities around China, you'll see all these bikes everywhere.

They're just parked in the streets and not at a dock. One of the things that has become really popular is you can just pick up a bike, unlock it with your smartphone, ride it to somewhere else and then immediately drop it off there.

If you take a look at what they've had to enable from a digital standpoint to do those kinds of things, first off, they have to be able to track where all the bicycles are, so they built all these new sensors and chipsets into them. They have to work indoors and in congested areas. They've got to be able to work with their phones in an easy way to do mobile payments around that.

Then, you've got to be able to have the logistics necessary to manage OK, if they're not just going from a fixed point A to B, how do you manage all that?

With these kinds of things, where China has been sort of leading as opposed to things first coming from the US over to the China market, it's interesting to see how digital technology is really changing a lot of people's lives in ways that are very different compared to other parts of the world.

Gordon:  It seems particularly when you talk about payment methods and things like that, there is a huge amount of inertia and custom and we've always done it this way. I always find it interesting even between countries in Europe, there's very different attitudes towards cash and credit cards and things like that.

Of course, the residents of any given country, whether it's the US or the UK or Sweden, just can't understand how messed up the rest of the world is compared to what's obviously the right thing. China's had an amazing transformation. Just really, I think, totally different from any place else in that regard.

Bryan:  Yeah. Most of the people that I interact with have gone completely cashless in Shenzhen and in Greater China. For example, just to give one story about how mobile payments have become totally integrated: when I go to a restaurant, sitting on the restaurant table will be QR codes.

You scan the QR code and up shows the menu on your phone. There'll be pictures of all the dishes. Sometimes, there'll be reviews from other people. You just order the dishes straight from your phone.

Because there's a QR code for every single seat, the restaurant knows where you're sitting so the food just arrives at your table. You pay for the food directly on your phone, so you never have to get a bill and it just goes away.

The entire process of how do you get your food, how do you order it, how do you pay for it is totally streamlined and integrated into your phone. Afterwards, if you want to split the bill with somebody else, you can just send money from your phone to somebody else's.

There's ways to be able to have groups set up so you just automatically send money to everyone at the same time. It's really, really easy to be able to do all these things.

When you think about the US process, typically you pay by credit card, you have to add your tip, you've got to wait for somebody to bring your menu and order for you. It's taken a lot of the friction out of even just how do you sit down and order a meal.

Gordon:  I imagine it must be very frustrating for a lot of Silicon Valley startups because they can do the backend infrastructure for this sort of thing, but people just don't do it because habits are established. Let's switch to you personally.

What's it been like? This was a big change for you from Cambridge Massachusetts and Red Hat right out of school to moving to Hong Kong.

Bryan:  Yeah, it's been really different. It's been really fun. Hong Kong is a world‑class city with tons of food. Everyone who knows me knows that I like to go out and eat and sample things. Then, Shenzhen is a really fascinating city just with all the hyper growth and technology that's going on there.

Working within a Chinese company has also been different. Working in the US, of course, for example, everyone speaks English. All my communications were in English and emails.

Working at a Chinese company, the vast majority of my emails come across in written Chinese. My Mandarin, it's spoken Mandarin. It's not very good. I've actually become super dependent on machine learning and AI.

Huawei has built out its own internal machine learning trained translation tools. Every day, I'm sitting on those tools translating everything back and forth between Mandarin and English. That's been one change.

Other things, obviously, when I go to the company cafeteria, I'm not eating salad bars anymore every day but trying all sorts of other things like that. Some of the other ways that people work over there, the company cultural values I find a little bit different from the US.

Just one thing that really surprised me, in the US, every tech company talks about being customer‑centric. In the US, when we talk about being customer‑centric, that means we build the best product for our customers, we listen to them, we pay attention to them, we provide good customer support.

At Huawei, their number one priority is also being customer‑centric and being dedicated to customers. But when I started learning about what it means to be customer‑centric at Huawei, we have a very different view on that compared to any place I've seen in the US.

For example, the stories that Huawei started telling us when I went to orientation about what it means to be customer-centric are things like: during the Ebola crisis in Africa a few years ago, all the Huawei engineers voluntarily stayed behind and didn't evacuate because they wanted to keep the mobile communication infrastructure up and running in Africa.

Or during the tsunami and subsequent nuclear crisis in Japan a few years ago, Huawei engineers were the very first ones to go back into Japan, to Tokyo and other parts where the radiation was really high, in order to set their infrastructure back up.

Huawei is able to be successful because they stick by their customers, even at personal cost. When they talk about being customer-centric, it's about how dedicated you are to your customers and how loyal you are to them.

A very, very different way of looking at the world, which I just found really surprising and fascinating.

Gordon:  We're at the Open Source Leadership Summit. Let's talk a little more about open source. What's the open source story in China?

Bryan:  Open source has been booming in China. Last year, for example, a lot of the key conferences by Linux Foundation...They started hosting parallel events over in Beijing, in Shanghai, and other places. Really, really, high attendance.

If you look at a lot of the companies, Huawei was one of the very, very early companies in to open source. Now it's amongst the top contributors to all the major projects, like Kubernetes, and OpenStack, and so on.

It's not just Huawei anymore. We're seeing a lot of other companies that are building their businesses using entirely open source technologies. Many of the Chinese companies are also starting to get involved in the open source community.

I know that when I was at Red Hat and working with you, we often met with a lot of different customers who were asking, "How do we get involved in open source? How do we contribute? How do we strategically adopt open source?"

I'm seeing those exact same conversations happening at many of the companies across China. Many of them are now starting to see open source as a very strategic way to build platforms because they're seeing all the innovations that can be possible and what happens as you start to collaborate together.

Gordon:  Do you see differences in the approach to open source? You mentioned earlier about cloud providers may be doing things a bit differently because they didn't have the 10 years of, "This is how we did things in 2006."

There's a lot of legacy carried forward because of that. When it comes to adopting open source, are you seeing different patterns in China compared to the US, given that, it's fair to say, a lot of conservative US companies resisted open source for a long time? Some still do.

Bryan:  I actually think you'll see a higher percentage deployment of open source at many of these companies. Just to use public cloud as an example, the public cloud market, as I mentioned, is relatively nascent. There's many, many different companies also competing in the space, all building public clouds.

Unlike the US market, the vast majority of these public clouds are all built on open source at the core, using OpenStack, for example. Most of the top clouds in China, whether from Huawei or from Tencent, or from China Mobile, and so on, are all built on top of OpenStack, which is not the case in the US, where most of the public clouds preexisted these open source technologies. They all built their own stuff.

That's just one example of what you see because these platforms came along a little bit later. When they saw that, they said, "Hey, somebody's already invented this stuff in open source. Let's just take advantage of that, and then go and build the other things that matter to our customers."

You're starting to see a default to open source in many of these places, in markets where you had a longer period of no incumbency, in a way that wasn't possible elsewhere.

Gordon:  Of course, the fact that if you're buying proprietary software, you are probably going to have to buy a lot of it from US companies. I imagine that plays a role in open source as well.

Bryan:  Definitely. One of the great benefits of open source is that it puts you in control of your own destiny. Just like all the typical startups today in the US and elsewhere, open source is very natural for them.

It's very much a startup mentality almost, especially in Shenzhen where there's so many new businesses being formed all the time. There's pretty much a maker's mentality in terms of how do you hack hardware, and how do you hack software?

Open source does play along very, very nicely with a lot of that dynamic in terms of what people are trying to do.

Gordon:  You mentioned language, in your case, earlier on. How are you seeing that affecting...There are a variety of reasons why it's probably harder for someone in China or a company in China to fully interact with a number of the open source communities.

How do you see that playing out?

Bryan:  That can be one of the big challenges. I see both language as well as communication medium as one of the challenges. Obviously, English has become the de facto language everyone uses in open source.

Across China, it's a little bit uneven in terms of how fluent people are at communicating in English, let alone being able to persuade, or evangelize, or say, "This is why this commit should be good," or take some of the leadership positions. The good thing is that the open source communities have, in general, been welcoming.

The other dynamic is a lot of the communication channels that people typically use in China, WeChat is dominant in terms of the major communication channels.

Then some of the other popular platforms, like Twitter or Google, are not even accessible in China. It makes "How do you even connect with each other?" a little more difficult at times. There's a couple things that have been happening.

One is that you're starting to see many of the open source foundations, like the Linux Foundation here, set up supporting infrastructure in China, greater Asia-Pacific, and so on to try to foster a lot of the communities.

One of the effects of that has been that now you're starting to see pockets of projects that are initiated by collections of Chinese companies and then coming into other parts of the world, instead of everything just happening in the US, or Europe, or Latin America, or some other part, and then coming back into China.

I think that's good because now it means that contributions, and innovations, and leadership are coming from everywhere. I still think that there's a lot of things to be figured out in terms of how do you best incorporate people from all over the world into a community when the language, and communication mediums, and other things like that are just barriers.

Gordon:  What do you see ahead? You're doing strategy, without giving away any secrets. What are some of the things related to China broadly that you think people should be thinking about in looking out over the next few years? I'm not sure it makes much sense, in this industry, to talk about more than a few years out.

Bryan:  Huawei, obviously, is based in China, so we think a lot about China. Huawei is a very interesting company in that the vast majority of its business is actually overseas and not in mainland China. When I take a look strategically, I think across a few different dimensions.

One is from our overall technology portfolio. How do we make it useful for everyone in the world? Obviously, we take advantage of the fact that different market dynamics exist, whether in China, the UK, Brazil, or something like that.

The good thing is that a lot of these macro trends around digital transformation, around technologies like IoT, machine learning, and cloud computing, are the same things that are happening everywhere. It's more about, "What is the actual solution deployment that you get into these other areas?"

Based on that, there's a few things that we look at in terms of, "OK, how do we create a good baseline of technologies, or good platforms, that can support all these different areas?" Among the principles that we think are very useful for building these generic platforms, being open is hugely important.

This is why Huawei has invested so much in open source because if we want to go to these different markets and be able to have it adapt, the only way that's possible is to build on an open foundation, and to make it so that people can do what they want with it in their own individual markets.

We also take a look at how we take the benefits of one market and bring them to another. A good example is in public cloud. Huawei operates its own public cloud in China, as I mentioned, with the software built on OpenStack, on top of the CNCF stack, and so on.

Then Huawei also sells and OEMs this exact same technology platform to many of our partners around the world: for example, Deutsche Telekom, France Telecom, Telefonica, China Mobile, and many of these large telcos around the world, all building the same public clouds and operating them in their local markets.

One of the reasons for this is that if you take a look at a lot of the existing public clouds today, the dominant ones are in the US; for example, just look at AWS, and Microsoft, and Google. First off, Google's not even present in China.

If you look at Amazon and Microsoft, they're much weaker in the China market. They can't even operate their own data centers there.

What we think is the best approach is to say, "Well, let's let the specialists in their own geographies operate their own leading-class public clouds. Let's figure out a way so that everything works together." So if you want to buy all your cloud capacity through Deutsche Telekom but you happen to be a multi-national like Volkswagen, then when you need to deploy into data centers in China, you can take advantage of a first-class operator like Huawei, and a first-class operator in Germany, like DT, versus a cloud provider that happens to be strong in one market but relatively weak or non-existent in another.

This is our approach to say, "How do we get the best local experience, but on that same technology platform, whether we operate it ourselves, or we resell it, and OEM it with others." It's using that shared platform, but then offering specialized experience to be able to deliver that in a best in class, local customer experience.

Gordon:  As you well know, that's something we see a lot at Red Hat. A lot of people go, "Oh, the public cloud market, that's AWS, Google, and Microsoft," and, if they're thinking more internationally, "there's a couple of people in China, too, whose names I forget."

Obviously, we do a lot of business with regional telcos and regional cloud providers running portable platforms like OpenShift and, obviously, Red Hat Enterprise Linux.

You do have this portable and transferable experience among clouds in different regions, different countries, what have you.

Bryan:  Absolutely. This is one of the reasons why Red Hat and Huawei have both been partnering as well. Red Hat's open hybrid cloud strategy and portfolio around it, being able to enable customers to use open source technology so they can run their applications in all these different environments.

That's very consistent with Huawei as well, as we're looking to enable these platforms to run workloads from Red Hat and others around the world. We very much believe in being open and being able to give customers that flexibility and a best-in-class experience. However they want to use it, that's the most critical thing.


Podcast: Open source past, present, and future with the Linux Foundation's Jim Zemlin


Jim Zemlin carved some time out of his busy schedule to sit down with me at the Open Source Leadership Summit in early March 2018. I’ve known Jim since right around the time I became an industry analyst, back when he headed the Free Standards Group, which merged with the Open Source Development Labs in 2007 to form the Linux Foundation.

In this podcast, Jim reflects back on how Linux and open source have evolved, the lessons he and the Linux Foundation (where he is the executive director) have learned, and why open source has become so pervasive. He talks about “the defeatism of free-riding” and how over time, the recognition that there’s business value in collectively-developed software has become increasingly widespread.

Listen to the podcast MP3 [00:18:10]

Listen to the podcast OGG [00:18:10]

[Transcript]

Gordon Haff:  I'm very pleased to be here at the Open Source Leadership Summit with Jim Zemlin who has taken time out of his very, very, very packed schedule here. He's the Executive Director of the Linux Foundation.

Jim, you headed the Free Standards Group until it merged with the Open Source Development Labs to become the Linux Foundation. That was clearly a very different time for open source, generally, and Linux, in particular. I'm probably showing my age as well that I probably met Jim right when he took this position.

Can you describe your involvement early on, how you thought about open source at the time, and how things have changed since then?

Jim:  I grew up in the computing industry, to some degree. My father was a computer programmer growing up. My grandfather was a programmer, oddly. He was also one of the founders of the company called Cray Research, and that's just been part of my blood.

The funny thing is that this job combines something that's technical with something that was a big influence on my grandmother's side. She was a single mom raising my father and my uncle, who's developmentally disabled, and in 1953 she started the first vocational education nonprofit for adults with developmental disabilities.

When you see someone who has these developmental disabilities working at, maybe, a restaurant or somewhere: my grandmother started this organization, Opportunity Workshop, to help them find opportunities and live their lives in a meaningful way.

It was this combination of two huge influences, the nonprofit work and technology. For me, that's what was the appeal of getting into this. Now having said that, I do recall at the same time meeting my ‑‑ now ‑‑ wife for the first time, and having her ask me what I did for a living and say, "Well, I work at this nonprofit and it's technology. It's all open source. Everyone shares everything." The look of disappointment on her face was just palpable.

Since then, our organization, which I consider this supporting cast in open source and Linux and the projects we work on, has certainly grown. Much more importantly, open source as a way of collectively innovating and creating incredible technology has grown exponentially.

It's now part and parcel of how almost every technology, product, and service is built.

Gordon:  You started out in Linux. Really, what we've seen over the last number of years has been this morphing from everything's about Linux. Of course, there have been other projects, like Apache, and so forth for a long time.

Still, there was this centrality of Linux to everything. It's obviously still very important. You employ Linus, and so forth. The Linux Foundation and the industry more broadly has come to be about so many other forms of open source.

What was the point, as executive director of the Linux Foundation, did you come to realize or to start making a real effort to broaden the Linux Foundation to encompass all these other things?

Jim:  Yeah, it happened. Good timing and luck beats any grand strategy every time. It started when open source as, again, a more mainstream innovation platform, started taking off in the tech sector. I would have organization after organization come to me.

They didn't want to talk about Linux; they wanted to talk about the process of how open source works or what the legal frameworks were. They were no longer wondering whether or not Linux itself, as a technology, was good, or secure, or reliable, or scalable.

They were no longer concerned about whether or not open source was an important thing or of high value as a way to innovate. They wanted the specific playbook. They wanted the detailed instructions on "How do I take code, co‑develop it with, maybe, my competitors or my peers.

"What licenses should I choose? What do those legal licenses mean so that I can share effectively? How do I build an engineering organization that can work both internally and externally to my particular firm? How do I build an open source project with thousands of people? How do I make that scalable?"

They wanted that. Of course, Linux was the quintessential existence proof of good open source projects. Because of that, we started to say, "Hey, if we can take some of the best things around Linux, and the processes, and the methodologies, and lend them to other technologies, that would be of super high value."

To some degree, we got dragged into it. Then over a number of years, we've just been improving how we help grow open source projects, whether it's Kubernetes, or Hyperledger, or Node.js, or others to create massive ecosystems around them.

In retrospect, Linux proves to be one of the more exceptional unique projects [laughs] in terms of how it's organized and how it's run, and so forth. We started, in terms of lending the best practices of Linux, to create these big ecosystems around different technologies.

Different open source projects have proven to use less and less of those practices and to borrow more from the great commons of the open source community in terms of how to run these.

Gordon:  What's, maybe, the biggest couple of lessons you think you've learned in the last 10 years or however many years, things that have surprised you, things that you didn't expect, things that caused you to revisit your assumptions?

Jim:  I mean, humility is something that you certainly have to have in this particular role. That's the personal lesson that I've learned over 10 years, or more than that, I guess, about 15 years now: being in the background, leading through influence, being the supporting cast, letting people rise to the greatness that's in them through these great projects, not taking credit for any of that work, and showcasing those people, whether it's a developer or an attorney who's moved the needle on convincing their firm to participate in open source in a big, meaningful way.

The most important part of what I have learned is that at the foundation we have this saying, and it's part of our culture: being humble, hopeful, and helpful. It's what we need to do. The humility is that we're not the rock stars or the folks who create all the value. It's developers and folks who invest in these communities and create incredible technology products and services from them.

The hopeful part is, I'll tell you, almost every project we start, many people tell me that it will never work and that we're doing everything wrong. [laughs] If you're not optimistic, it certainly can be very, very difficult.

Then the helpful part is just what we do. We're facilitators in bringing together now over a thousand organizations from all over the world and tens of thousands of developers to work on modernizing the world's mobile networks.

Or using open source technology to manage Walmart's food supply chain, or creating an automotive system for 20 million production vehicles. We're not going to actually do that. Developers and companies, like Toyota, who make automobiles and roll them out by the millions are the ones who are responsible for that.

That lesson of humility, and optimism, and helpfulness is what I think is the most important one for me, at least.

Gordon:  Let's look forward. I've been going to these events for a long time. My impression is that for quite a few of those years you and others in the Linux Foundation almost felt a need to celebrate open source and Linux and send the message that, "Look, this stuff's really important, and these are all these great stats about it."

Your keynote yesterday was interesting because it was like, "We don't need to do that any longer, but we're not perfect. We're not there. We need to keep improving." You talked to the audience about some of the areas where you think open source still has work to do.

Jim:  Again, no one needs to be convinced these days that open source is doing great, although I will say I do like to indulge in talking about how great it is every now and then. It's part of the job.

What I showed yesterday, I mentioned that these are intentionally detailed slides that I'm showing. There's a lot of very detailed methodology behind the interplay of building a community that is a great upstream to a downstream industry that is taking this code and using it.

It doesn't necessarily have to be for-profit; it can be governments. We saw the National Oceanic and Atmospheric Administration, NOAA, today talking about how they're using open source to share big data to help with climate change and help with our oceans.

At the same time, we saw a commercial company, Change Healthcare, which manages two-thirds of the medical claims in the United States, using open source Hyperledger to make that process work.

We need a detailed way to say, "Upstream, create an ecosystem of knowledgeable developers from many diverse backgrounds solving a meaningful problem, sharing intellectual property, and providing some form of consistency and conformance for those projects so that they can be consumed downstream in these wonderful ways."

It's something detailed that we can improve upon in terms of the speed at which those ecosystems can grow, and then the pace at which they can be consumed and reinvested back in.

I was intentionally detailed because we can always get better at, for example, how do we create more secure code in upstream projects? How do we take the responsibility of this code being used in important systems that impact the privacy or the health of millions of individuals? These are always areas that we can improve in.

This event is where you have the actual people who make the decisions about what code goes into which projects, and the community leaders who are maintainers of these huge open source projects, get together and collaborate on how we can improve those things.

Again, whether it's cybersecurity, whether it's new licenses for sharing large data sets, whether it's ways to automate the sharing of AI models that have been trained for various purposes and can be reused effectively across different competitors or peers.

Those are all the things we can improve in. That was the detail that matters. All the rest is typical Zemlin hyperbole about how great open source is. People have seen that movie before.

Gordon:  Open source is interesting today. You have this loose confederation of companies that are working together, contributing to the commons. Individually, any of these companies could pull back and free-ride on what others are doing. Arguably, some do more than they should.

How do you see this going forward? Is this a new type of business relationship, or has it always existed?

Jim:  I'll tell you over 10 years ago, a lot of my personal time, and a lot of time of our organization, was spent convincing organizations of the defeatism of free riding in open source projects.

We had written white papers on why it would be important to open source your device drivers for Linux, and showed the actual business value of having collectively maintained, open drivers as opposed to trying to maintain some random proprietary driver, and so forth.

We would explain the futility of just forking an open source project and not sharing your changes back, to the degree that you've defeated the whole purpose in terms of collective value. You're now, basically, supporting your own proprietary fork; whether or not it's open source doesn't matter at that point. No one understands it but you.

The epiphany that many companies have had over the last three to four years, in particular, has been, "Wow. If I have processes where I can bring code in, modify it for my purposes, and then, most importantly, share those changes back, those changes will be maintained over time.

"When I build my next project or a product, I should say, that project will be in line with, in a much more effective way, the products that I'm building.

"To get the value, it's not just consumed, it is to share back and that there's not some moral obligation, although I would argue that that's also important. There's an actual incredibly large business benefit to that as well." The industry has gotten that, and that's a big change.

Gordon:  Over a relatively short period of time, I would argue, there's been this recognition, at least in the broad landscape, that open source is not this hippy thing, but that it does deliver business value for companies.

Jim:  This morning it was such a fun session where we had Mark Russinovich, the CTO of Microsoft Azure, talking about how important open source is to the AI platforms they're building and how they're using open source to diagnose pneumonia in children.

You heard the least hippy company in the United States, at least from my perspective, Home Depot, talking about how important open source is, not just to building the tools that help automate the systems that run Home Depot, but to hiring developers who want to come work at Home Depot if it's seen as an open source company.

NOAA, the National Oceanic and Atmospheric Administration, is just a very old, non-hippy, very conservative organization. You hear them talking about how important open source is for them and how important it is for allowing them to share the petabytes of data that they have with the world.

We've thoroughly crossed over into the non‑hippy part, although I will say there is a special place in my heart that will live forever for the antiestablishment sensibility that is represented in the open source movement ‑‑ the questioning of people's assumptions, the demand for sharing.

All of those things, those iconoclastic and what some people would consider antiestablishment sensibilities are now mainstream. Maybe, the world's a little better off with some of that hippiness in the mainstream now.

Podcast: Containers and Kubernetes with Chris Aniszczyk


Chris Aniszczyk is the Executive Director of the Open Container Initiative. In this podcast, recorded at the Open Source Leadership Summit in 2018, Chris speaks to me about the role of container standardization, what's coming next with the OCI, and how open source collaboration changes at scale. Also on this podcast: Kubernetes' graduation within the Cloud Native Computing Foundation (CNCF), how companies like Red Hat create products such as OpenShift using projects from the CNCF and elsewhere, and the need to tailor approaches for individual open source communities.

Listen to podcast MP3 [00:19:17]

Listen to podcast OGG [00:19:17]

[Transcript]

Gordon Haff: We talked at the last Open Source Leadership Summit. We're going to review a little bit about what has changed since then and where things stand right now, but I'm going to spend most of this podcast talking with Chris about the role that OCI plays in the broader landscape and how the sorts of things that the Open Container Initiative does play into open source more broadly.

Chris Aniszczyk: I can give a little bit of an update about where we are with OCI. For folks who aren't familiar, we founded OCI a little over two and a half years ago with the express purpose of bringing some very basic, minimal container standards to prevent an issue at the time where people feared that we would have multiple competing container run‑times that were diverging.

If you built a container for one system and tried to move clouds or something, you'd have to rebuild everything. Essentially, it would kill portability; it would kill the whole momentum around pushing cloud-native and container computing forward.

The OCI was founded with Docker, CoreOS, Red Hat, Microsoft, a lot of the major cloud providers, AWS, Google, and so on, to build these minimal standards. We've, in my opinion, been very successful. It's taken a little while, but mid-last year we had our first 1.0 release.

It's been great to see that finally happen, which basically blessed two specific -- we call them "specifications." We are somewhat of a standards body, but more of a modern standards body is the best way I like to describe it. We're very much a code-first organization, with the specs trailing the code.

We coalesced around two specifications. One was the run‑time, which is how do you execute a start‑stop lifecycle of a container, and then the image spec, which is the underlying image format of how a container is packaged.

We hit 1.0, and every major cloud provider and any company that is running containers is adhering to and taking advantage of the OCI specifications. It's been a great journey to see the adoption happen naturally. I'm very stoked to see that.

In terms of what's going on for this year, recently we just announced...Every year we have elections around our Technical Oversight Board. The OCI TOB is responsible for adding projects, making any crazy changes in directions, and so on.

We had a bit of a shuffling, and now, among those nine members, we have folks from Red Hat, Microsoft, Docker, IBM, Google, and even the honorable Greg KH from the kernel team on the Technical Oversight Board. It's a great mix of folks there.

We're also adding some new members in the coming weeks: some different cloud providers in China, and a container project called Kata Containers is joining and will be adhering to and supporting OCI within that project.

In terms of what's next, let's see. There's a lot of discussion in the OCI community to add something called a "distribution specification." Now that we've nailed the run‑time and image bits, how do we fetch containers? How do we distribute them?

There's a lot of different container registries out there. They all have similar APIs, but there's been a bit of incompatibility between them. The community is proposing to take parts of the Docker V2 registry API spec that's already there, make some improvements, get the final input from the community, and bless that as a specification.

Gordon:  Thanks. That was a great rundown of where we are. I'm going to take things up maybe 20,000 feet now and talk about the philosophy here and how it pervades open source more broadly.

You talked about the specific motivations behind starting OCI, but let's take that up a level about what's the general problem in open source that OCI was trying to address that may exist in other places, as well?

Chris: If you saw Jim's keynote today, it touched on the whole interest around open source collaboration and sustainability. When you're dealing with a lot of different companies, a lot of mutual self-interests, you do need a neutral setting where they can all come together, collaborate on software, agree to a set of rules, and ensure that there is a fair playing field for everyone.

OCI, and even other open source foundations under the Linux Foundation like CNCF, help guarantee that for projects and companies. That there is a fair playing field, so one company doesn't take advantage of another.

It's business at the end of the day. It's all mutual self-interest; everyone's out for themselves. Foundations like OCI exist to ensure that things are fair and that projects are supported in a neutral way. I think that's the problem space that OCI and other efforts within the Linux Foundation address.

Gordon:  In some ways, these are, at least, potential problems that are growing out of scale and out of success, that when open source was a fairly small thing and a relatively small set of communities, everyone knew each other, and this kind of thing wasn't as big a problem.

At one level, open source, generally, has certainly helped. You'll never solve the collaboration problem; people are people. It's certainly helped the collaboration problem in general, but it seems as if we're seeing that there needs to be more process around collaboration in open source, which is something that things like OCI can address.

Chris:  To me, it's all about fairness. Humans and businesses innately want things to be fair and will call things out otherwise. Foundations and efforts like the OCI help establish those rules, so companies feel comfortable collaborating. It's all about establishing that initial level of trust. If people and companies trust each other within the projects, good work will happen.

The other thing that Jim also alluded a little bit today in his keynote was open source has done extremely well. We've come a long way since those Linux days, "Microsoft's the evil empire," all those fun jokes you had. Microsoft's a huge adopter of open source now. It's incredible to see how the company has changed.

We have this cool effort within the Linux Foundation called Automotive Grade Linux, where a bunch of companies -- Toyota and so on -- are getting together to collaborate, to bring open source to vehicles and cars. It's a whole new industry being impacted by open source.

We're going to see other let's call them "industry verticals," for lack of a better word, starting to embrace and take advantage of open source. I spend a small portion of my job helping companies, through the Linux Foundation, come and learn how to build an open source program or how to build an open source strategy for the company.

I'm seeing industries out of left field where you're like, "Why is film interested in open source? Why is pharma?" They really want to do something with open source.

It's just incredible to see that interest in other verticals out there, where companies are like, "Look, let's find ways to collaborate." A lot of the stuff that we do isn't how we're generating our business value, it's some commodity that we need to get our business done. Let's collaborate on that and then focus on the business value.

That's the trend I think you'll continue to see in the future.

Gordon:  This seems to be the really dramatic change over the last few years. I'm writing this book on open source. Historically, there was so much attention being paid to things like licenses, to things like being able to view source code, to having free distribution, that kind of thing. Originally, there wasn't much attention paid to this coordination and collaboration.

Chris:  Licensing and all that is table stakes. That's a requirement to get the gears going for collaboration, but there's whole aspects around values, coordination, governance. One of the hacks that we came up with ‑‑ at least, in the Cloud Native Computing Foundation, in CNCF ‑‑ was not to prescribe one explicit governance model for projects.

We give them the choice to craft their own, as long as it's transparent and fair. I think that's the reality: each open source project is different. They have their own unique needs. It's like the Tolstoy principle for open source: "Each project is unhappy in its own unique way." Trying to prescribe one way for them to solve things potentially leads to ruin, in my opinion.

I think the whole custom crafting of governance and allowing projects to evolve that over time as they grow has been a great thing, a lesson learned for us.

Gordon:  Looking forward, the OCI, this idea of creating a standard syscall layer, if you would, for other aspects of the open source world, where are some things, if you look out there, that there's benefits to a similar type of approach?

Chris:  That's a great question. Obviously, within OCI I mentioned the distribution of containers as a specific problem that we want to standardize; beyond that, within the OCI context, I'm not sure where things will go. There's been discussion around maybe standardizing the way containers are built. There's many different ways to do that.

Outside of OCI, I don't know. I think there have been [laughs] many efforts within this industry to standardize on certain things. It's a mixed road of failures and successes. Some of us may remember the LSB -- [laughs] Linux Standard Base -- and where that went.

The idea was noble, but sometimes these things don't necessarily succeed. I'm sure we'll see other efforts, at least in the Linux Foundation, around this.

Gordon:  I think, as maybe you've alluded to, these days we have this idea of standards coming out of code, rather than standards being established first.

Chris:  To me, that's probably the biggest change. The archetype of the traditional standards organization, where you have a bunch of architects locked in a room, drawing pictures and diagrams. I think that's dead.

I think what we're going to transition to, where we're going to go, is organizations like OCI. They do have the traditional model that exists in a standards organization, but they are a code-first organization.

There is no cabal meeting of architects drawing pictures and then sending the message down from on high, "Go implement this." That's just not how it works anymore. I think standards organizations are realizing this, and I think OCI is a trendsetter for how a modern standards organization should work.

Gordon:  In one of the keynotes this morning, we talked about Kubernetes and the idea that the extensibility points really evolve out of the code.

Chris:  Kubernetes has evolved significantly. They've evolved their governance. Their values have shifted a little bit. I give them a lot of credit: they have listened to their end‑users and community more so than probably any other project that I've interacted with. They've been very thoughtful, and they should be very proud of graduating today, which was another big announcement.

Gordon:  I confess, given how many customers [laughs] are getting Kubernetes into production today, I was like, "What? They hadn't graduated?"

Chris:  I know. It's funny, but you have to remember, both OCI and CNCF are only about two and a half years old. When we started CNCF, we didn't have a Technical Board. We didn't have the TOC; we had to bootstrap that. When the TOC was established six months later, it was, "OK, we need a development process," and that takes a while.

We're constantly evolving these things. I would say about a year ago, we settled on a graduation and development process, and we're like, "OK, it's there. Let's see how projects will progress before we start asking projects to graduate."

We eventually reached out to Kubernetes once they had established their steering committee ‑‑ we were waiting for that ‑‑ and they decided, "Yeah, things look great! We should formally apply to graduate." The TOC gladly accepted the application, voted, and approved it.

I think it's a great sign; it reflects the maturity of CNCF as an organization. We have a good graduation and development process, but it took a while to make it. We had to create this stuff on our own. We definitely learned from other foundations out there, but we had to create it on our own and put our own spin on it.

Gordon:  I think a lot of people out there would like to hear, "Here's the playbook. Here's how you do open source," and it just doesn't work that way.

Chris:  I wish there was a playbook. From my experience at the Linux Foundation, like I said, each open source project is unhappy in its own way. Creating custom governance, custom bylaws, and figuring out how these communities coordinate and interact? That's what the Linux Foundation does. You cannot just have one way to do things.

There are other successful efforts out there, like Apache and the Apache way. I've been involved in the Eclipse Foundation; there was the Eclipse way and its development process. To me, it's just hard to prescribe one specific solution for all projects to adopt, because every project has its own unique needs, depending on the business and users and so on.

Our work at the Linux Foundation is all about catering to those needs and providing the best possible solution for them.

Gordon:  You're saying your work is never done? [laughs] That's probably a good end.

Chris:  We're busy. We're extremely busy. For me, it's interesting, because I wear multiple hats at the LF. Working on the OCI, that's a very constrained, focused space. CNCF, obviously, has grown and expanded from just one project, Kubernetes, to now 16, and it's ever‑growing.

It reflects how Cloud Native has taken the industry by storm. People really want to take advantage of some of the lessons learned from Google and other Internet‑scale giants, because they're facing the same needs or they're becoming software companies themselves.

We had discussions with the Automotive Grade Linux folks. Great! They're working together on packaging software for cars and the integrated console. Then they're like, "Holy crap, we're going to need a cloud to back this. Where are we going to go for this?"

I'm like, "The CNCF, the Cloud Native folks, they've figured this out, so why don't we leverage that?" To me, it's just great to see that level of collaboration and interest.

Gordon:  The CNCF's growth over the last year is pretty amazing. I did a podcast with Dan Kohn, the Executive Director, at the Open Source Leadership Summit last year. At the time, Kubernetes was obviously in there, and I think Prometheus had just been accepted. That was it.

Chris: I know. We had just two projects and a barely‑formed Technical Board and process, and it's amazing to see. Today, like I said, is a very special day for a lot of us who have been involved with the foundation from the beginning, including those folks on the TOC.

It's great to see Kubernetes graduate. I'm excited to kick off a vote soon for Prometheus, which was our second project. It's a very important project within the ecosystem. It integrates not only with Kubernetes; to be honest, many people use Prometheus even without Kubernetes, for all sorts of things.

It's a very useful tool in the Cloud Native space, and it's also a very mature project. [laughs]
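
[For readers who haven't used Prometheus: below is a minimal, hypothetical sketch of instrumenting a standalone Python service with the Prometheus client library so it can be scraped. The metric name and port are invented, and no Kubernetes is involved.]

```python
# Hypothetical sketch: exposing a metric from a plain Python service so
# Prometheus can scrape it -- no Kubernetes required.
import random
import time

from prometheus_client import Counter, start_http_server

# Counter name and help text are made up for illustration.
WORK_DONE = Counter("myapp_work_done_total", "Units of work completed")

if __name__ == "__main__":
    start_http_server(8000)  # serves metrics at http://localhost:8000/metrics
    while True:
        WORK_DONE.inc()              # record one unit of work
        time.sleep(random.random())  # simulate doing that work
```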

Gordon:  We probably should wrap up, but projects within CNCF, among other places, are such a great example of how open source creates this environment where someone can come up with a new distributed tracing tool ‑‑ logging, monitoring, whatever ‑‑ and that stuff can all work together very modularly.

Chris:  Absolutely. One of the key lessons we learned when we were starting CNCF was we didn't want to force the integration. If you look at the new interactive landscape and the cute little trail map we launched today, the whole point of that is to state that each project independently works on its own.

Some companies, like Red Hat, are free to integrate them and build a cool product like OpenShift with them, but in general, there is no forced integration. The integration happens by our members building products and serving end‑users. There is no forced release train or rule that says, "All projects must work together."

The key hack for us was to let the end‑users and members build useful things for end‑users, and have that dictate how things are integrated and work together. I think it's working out.

Sunday, February 25, 2018

Book Review: We Were Yahoo!

Overall: 3 out of 5

About two-thirds through We Were Yahoo!, author Jeremy Ring writes the following: "Is Yahoo! a media company or a technology company? The company could never agree on this central question. There wasn’t a single CEO in the history of the organization who effectively directed the company toward a single strategy.”

That’s a good insight. There are others about how the Internet was transforming advertising during the dot-com boom of the late 1990s; Ring opened Yahoo’s first East Coast office and oversaw the creation of sales programs as Senior Director of Sales Programs. And about some of the challenges caused by trying to merge technology and media cultures.

However, I can’t really recommend the book overall.

It’s just not very well written or edited. It skips around, repeats, and uses strained metaphors. It’s almost as if there are several chunks of different books here. There’s the book about what it was like during the dot-com phase of Yahoo when the stock was headed into the stratosphere. There are the ruminations about all the coulda’, woulda’, shoulda’s in Yahoo’s past. Shoulda’ been Google. Coulda’ been Facebook. There’s a fairly bizarre personal story that’s more or less a complete tangent.

The book also lacks any particular payoff for being told by an insider. The author’s time at Yahoo mostly gets dealt with in a couple of chapters. And, other than some accounts of dot-com euphoria and differences between how Yahoo and traditional media worked, the insider insights are both thin and scattered. In addition to being told in a rather disorganized way, the post-dot-bomb history is told mostly from the perspective of an outsider. It doesn’t really square with the title.

Fundamentally, this book just lacks a strong narrative flow. For example, issues of sales force organization early on pop up in a discussion late in the book about Yahoo’s ultimate failure in search. To the degree that the book has interesting content, it’s tough to get through it and connect it to a broader storyline because the book so often skips from one year or one argument to another.

Tuesday, February 13, 2018

Podcast: Diane Mueller on evolving communities and OpenShift Commons


Diane Mueller is the community manager for OpenShift Origin, a CaaS and PaaS platform for cloud-native application development, deployment, and operations. In this podcast, she discusses how communities like the OpenShift Commons are evolving from groups that were singularly focused on code contributions to ones that focus increasingly on users and contributors in other areas.

Listen to MP3 [20:52]

Listen to OGG [20:52]

[TRANSCRIPT]

 Gordon Haff: For the first time in way too long, I am here with Diane Mueller, who runs community development for Red Hat OpenShift. Diane's been spending a lot of time over the last year thinking about how communities should be built and how they should be allowed to evolve.

As a result, I think she has some interesting things to say about how open-source communities in general are evolving. Maybe we can start with a little bit of context of where you're coming from. We just had another very successful OpenShift Commons Gathering. What is OpenShift Commons, and where are we right now?

Diane Mueller: I'm the director of community development -- which means nothing to anybody -- for OpenShift.

Basically, I've been in the open source world for almost 20 years now and worked on lots of different open standards, open source projects, and done a lot of thinking about what it is to develop a community that will sustain an open source project or move a standard forward in adoption.

Thinking about what it really takes to create -- as someone famous once said, the village that it takes to raise a child -- the village that it takes to create a global ecosystem that supports and sustains people using your project.

We hear a lot about trying to grab code contributions for an individual project and grow the maintainers of a project.

I've learned over the past four or five years working at Red Hat about a lot of different open-source community models. With OpenShift, we had some really interesting things happen that forced us to really open up our ideas about what it is to make a community that will sustain a project for the long haul. A lot of it was about collaboration with upstream projects.

What we did about two years ago, we pivoted the whole underpinnings of OpenShift to work with the Kubernetes community. If you don't know Kubernetes, Google it, find out about it -- cluster management and a whole lot more at scale for clouds. It is the underpinnings now, along with a lot of other open projects for OpenShift Origin, which is the project that I manage for Red Hat.

What happened when we did that pivot was two things. One, we pivoted and we had an existing user base, so we had lots of people we had to educate about the redirection of our architecture and our project, and how to use it with new tools, new pods and a different approach to containers. All kinds of stuff.

We had this fire-hose of information that we had to get out there to people who were already using it. Then we had a whole new community of people -- the Kubernetes community and others -- that we were trying to figure out how to collaborate with.

Now, rather than just trying to get people to contribute to Origin, we were contributing back to upstream projects that were integral to our project's lifecycle, and we had to keep in sync with other projects and figure out how to collaborate with Docker, then the OCI and all the other container standards.

Tons of other projects for monitoring within the CNCF: Prometheus, Grafeas, and other projects that are out there. We had to create a new model. That model we named -- we had to give it some sort of name -- we called it the Commons because Red Hat's near Boston, and I'm from that area.

Boston Common is a shared resource, the grass where you bring your cows to graze and you have your farmer's hipster market or whatever it is today that they do on Boston Commons besides protests and wonderful things, but it's also right next to City Hall and all the state government stuff.

The governance and all of the other pieces threaded together with the concept of a commons, so we created something called the OpenShift Commons. What we tried to do was open up our minds about what constituted the community.

There have been lots of other examples of people doing similar things. We reached out to the upstream communities, to -- we didn't ignore the contributors to our code base, because we love them -- to the service providers who were building infrastructure that hosted OpenShift.

We have AWS, Google, VMware, and a bazillion other cloud-hosting providers that are trying to deploy OpenShift and make managed service offerings of it. They contributed a whole lot of really good feedback, besides what we learned ourselves from hosting OpenShift Online originally, then OpenShift Dedicated, and openshift.io -- lots of things ourselves.

We learned a lot, and got a lot of feedback from there. Also all of the ISVs, all the consultants, all the database folks from Crunchy Data to Tigera. All kinds of other people who were trying to work with us, but didn't have a way in because, in the old model of open-source community management, you were only looking for those code contributors.

You only really talked to people when they were those people that were giving you code. We tried to flip this all on its head.

In addition to all these people who were adding services, or providing infrastructure, or working with us on this, there was that whole other community out there, the customers, the people who were actually deploying OpenShift Origin or deploying OpenShift Container Platform. How do we get their feedback back to the contributors, the engineers, the service providers on this topic?

What we did was try to create a new model, a new ecosystem that incorporated all of those different parties and different perspectives. We used a lot of virtual tools, a lot of new tools like Slack. We stepped up beyond the mailing list. We do weekly briefings. We went very virtual because, one, I don't scale. The evangelist and dev advocate team doesn't scale. The PMs working at Red Hat don't scale.

We need to be able to get all that word out there, all this new information out there, so we went very virtual. We worked with a lot of people to create online learning stuff, a lot of really good tooling, and we had a lot of community help and support in doing that.

If you go on our Slack channel for the Commons at OpenShift, you'll see a lot of people talking to each other who are not Red Hatters, giving support to each other and their peers. A lot of it was about creating this peer-to-peer network model, wherein Red Hat got out of the way so the conversations could happen directly between, say, Amadeus and Clydesdale Bank or someone else.

It had different interesting aspects. We were trying to create and use all those tools to do that, but we also realized we couldn't just be virtual. We're here in London, and we just came through a day of what we called an OpenShift gathering -- which is not like a Red Hat Summit, which is huge -- or a meetup, which is just a two-hour thing.

A gathering where we all come together, like on the Commons. We have conversations. We have panels of like people talking about stuff, panels of disparate people from different open source projects. We get updates from different upstream projects.

There's lots of stuff that we do to try to make that virtual world work, because I think you do need the people connections.

As soon as I stopped trying to attract contributors to my project, we went from... I think we had five external organizations contributing to OpenShift Origin when I started on this project. We've gone from 5 organizations to 70. That's huge in two years' time. That's a huge growth spurt. It just shows that giving people a voice, and making a space for people from different parts of our community and different parts of our ecosystem, actually drove code contribution.

I think we have to break the model of what we say is open-source community. We need new rules for revolutionaries. We need new open-source revolutionaries, and we need an evolution in how we think about who is part of the community. That's really what we've tried to do. What I've tried to do over the past couple of years is give the podium away.

Gordon: I'd like to take things up a level or abstract things away a level. We love abstracting things in computer science.

You've been talking about things that have been done specifically in the context of OpenShift. I'd like to probe about generalizing this. Why do you think things are different? Why has this been a good model for OpenShift, whereas it's not something that we've necessarily really seen in most other open source projects? Has the world changed? Is OpenShift different in some ways? Why?

Diane: We don't have enough time for me to rant completely, but I think, in some ways, in the old world, we would've thrown our code into a foundation. There's a lot of room for people and foundations to help grow and incubate projects. Because of the pivot that we did to Kubernetes, we were forced to do something different.

In doing that and in breaking the model, or the rules, for what an open-source community is, we started finding a new framework. I think the framework is a more inclusive, diverse community and it allows us to really drive innovation.

If you abstract what we've done for OpenShift, you can apply it to any other open-source project -- maybe not one that's in incubation. There are some things that have come out of OpenShift that we're slowly incubating into other projects, or moving into Kubernetes as functions and features.

I think if you abstract what we've done, you can apply it to any existing open-source community. The foundations still, in some ways, play a nice role for giving you some structure around governance, and helping incubate stuff, and helping create standards. I really love what OCI is doing to create standards around containers. There's still a role for that in some ways.

I think the lesson that we can learn from the experience, and that we can apply to other projects, is to open up the community so that it includes feedback mechanisms and gives the podium away from, say, an enterprise like Red Hat that's pushing something, so that we don't have a one-way street and we don't always have to play the mediator in every conversation.

What I'm trying to do is break down the barriers between the different people in the network and really try and help people make the connections across the community. To do that, the other secret ingredient in this model is there isn't any anonymity. There's a couple of rants I've done somewhere online about it.

I love GitHub, but everybody who signs up for GitHub pretty much uses a Gmail account or some super-secret email that I can't figure out. A lot of them flag their GitHub profiles with their affiliations, which is great. Then companies like Bitergia or Stackalytics scrape that, and we can figure out what organization they're from.
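
[To illustrate the affiliation point, here is a minimal, hypothetical sketch of reading a user's self-declared company from the public GitHub API. The username shown is GitHub's demo account; tools like Bitergia or Stackalytics do something far more thorough than this.]

```python
# Hypothetical sketch: look up the self-declared company on a GitHub profile.
# Real affiliation tooling (Bitergia, Stackalytics, etc.) is far more involved.
import requests

def declared_company(username):
    # Unauthenticated requests are rate-limited, which is fine for a demo.
    resp = requests.get(f"https://api.github.com/users/{username}", timeout=10)
    resp.raise_for_status()
    return resp.json().get("company")  # e.g. "@github", or None if not set

if __name__ == "__main__":
    print(declared_company("octocat"))  # "octocat" is GitHub's demo account
```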

I think, in terms of community, in order to have a real trusted peer-to-peer network, you've got to know who you're talking to. The other aspect of these community models is that we ask people to really be clear about who their corporate masters are, who they're working for.

That doesn't mean that's your agenda, but it does mean that, if I'm working at some big financial institution, I know that I'm talking to another big financial institution and there may be rules that apply that I need to worry about -- privacy and things like that. Also, I can learn lots of things from people outside of my spectrum, my normal peer-to-peer network. That's really been very helpful.

If you took this, it's not OpenShift-specific. These are the lessons; this is the framework we've had, with what I call the Commons model. You can brand it any way you want. I think it's the idea of having these shared resources, whether they are Slack channels or mailing lists, having the lack of anonymity, and the ability to...

Here in London, one of the Commons' members, Secnix, was really the major reason we actually hosted the gathering here. Justin Cook did an amazing job organizing the venue and helping us pull this whole thing together in less than 50 days. A lot of the community gatherings and things are driven by the Commons' members.

When you let go of being the absolute owner of a project... Not that Red Hat has let go of OpenShift or anything, I'm not saying that, but you let go and let other people into the community, and have them have a say, and a voice, and an ability to be recognized in lots of different ways.

We always talk in open source about how one of the best ways to get in is documentation, or logging an issue, or doing a pull request, or something like that... but there's a lot more than GitHub-centric little pull requests and issues. Those are good. Don't get me wrong.

I think maybe when I talk about SIGs, Special Interest Groups, the distinction we make with OpenShift SIGs is that they really are about sharing best practices, lessons learned, and what's in the stack that you're running. Maybe for machine learning on OpenShift it's, "Tell us what tools you're using. Share that with your peers," versus, say, a Kubernetes SIG, which is, "Tell me how I'm going to get Cluster Federation working," or, "How am I going to do service catalog work and contribute to that?" There's this conversational level of community that has to have somewhere to grow from. It has to be nurtured.

I think that's the role today in community development that I'm espousing, is nurturing those conversations.

Gordon: You mentioned Kubernetes a few times. You mentioned some things like OCI (Open Container Initiative) and CNCF (Cloud Native Computing Foundation). I've done a number of podcasts with various executive directors and other project heads within those foundations.

Obviously, OpenShift touches many of these things. Kubernetes, of course, but you've also mentioned Prometheus, and then there is Istio in the service mesh area, and a lot of other things. How do you think about and how do you interact and work with those foundations, which are often structured in a somewhat different way from OpenShift Commons?

Diane: I think the role of community management, or community development, is to create those connections and to make sure that the updates from those different foundations make it through, unfiltered, and get connected to all of the different pieces and parts. That's, in some ways, what the briefings we do are for.

We get people to talk from different projects, different foundations, different aspects of the community and make that information available in some ways. The foundations, like I said before, are really great around governance and incubation. What we're trying to do is create the conversations for cross-community collaboration. That's really the connective tissue of communities.

That's where communities really help: the collaboration that we do with these foundations drives the innovation back into OpenShift and into the people who use OpenShift -- into their practices, their enterprises, their uses of OpenShift, and the tooling that they're building on top of it.

Gordon: I promise I won't share with your manager, but what are your goals and plans for next year?

Diane: [laughs] Oh, geez. I think we just did the metrics thing. I hate metrics, but yeah, shh, don't tell my boss. More face-to-face time. More gatherings, more regional gatherings. We did one in London. We'll be doing something in Copenhagen before KubeCon and something at Red Hat Summit. More customer stories.

When I say customer stories, it's not the Red Hat customers. It's people who are using the different pieces and parts of our ecosystem to get them to tell us what their full stack is. What are they using?

When I ask someone to share their story and their lessons learned, OpenShift may be just a component of, say -- I'm really hooked on ML and AI right now -- they might be doing TensorFlow with JupyterHub, maybe they're off on the Kubeflow tangent, or maybe they're using Spark or something from the radanalytics team. I'm trying to tease out the entire stack conversation.

I see a lot more of that happening this year. I think we've hit the tipping point where we're no longer just trying to teach people what Kubernetes is. We did that last year. We've still got some more of that to do because it keeps evolving every three months. This year, I see it as the year of workloads on OpenShift. What are we doing? Getting more of those stories out there.

The OpenShift Commons Gathering at Summit will be almost entirely case studies. Users talking about what's in their stack. What lessons did they learn? What are the best practices? Sharing the ideas they've put into practice, just like we did here in London. There were some great stories here. Wait for the videos for that.

Metrics-wise, I'll probably double the number of organizations in the OpenShift Commons. That's really what we're seeing now; it's just really rapid. I think I said it at the London event, but in the past hundred days we've had over 40 organizations join the OpenShift Commons, which is phenomenal.

It's not like I'm going out and recruiting these people. They're just naturally, finally finding the commons.openshift.org website buried under all the other Red Hat OpenShift properties. I really encourage you, if you want to share your stories and meet your peers, to come to commons.openshift.org and fill out the join form.

You'll get an email from me with all my contact information. Maybe not my home phone number, but everything else. I think growing the community of people who have production deployments, and adding more of those, is really the big deal this year. Keeping track of everything that's going on in the upstream, too. Lots of that going on.

Gordon: Thank you, Diane. That's a good note to close on I think.

Diane: Thank you very much. It's time for more coffee.

Thursday, January 11, 2018

When companies focus too much on risk

When we think about security in the context of DevSecOps, an important mantra is that we need to move from thinking about providing absolute security to thinking about managing risk in the context of business outcomes. Move from “Just say no” to saying yes to small risks if the tradeoffs appear to be worth it.

Let me illustrate this principle (in addition to a couple of other things) with an example that’s not drawn from the IT world. 

Right before the holidays, I took a last-minute quick trip to speak at and attend a couple of events being held next to the airport outside San Francisco. Loaded the bags up and off I went. As I was being dropped off at the airport, I pulled out my driver’s license so I wouldn’t be fumbling around with my wallet, got out of the car, and headed into the terminal.

Somehow, in the course of 50 feet, space aliens made off with my license. Call the limo company. Driver takes a look. No luck. I still have absolutely no idea what happened. 

Now, normally, frequent traveler me has a travel folio with passport, spare credit cards, cash, and other potentially useful travel backups. But because this was just a quick trip I figured I didn’t need it.

Lesson #1: You may not think you need a backup. Until you do.

(See also: “It’s just a small code change. We don’t need to re-run the test suite.”)

Crap. Visions of my trip mashed up with mushroom clouds seemed appropriate. But I wandered over to the security line anyway.

Much to my surprise, my missing license turned out not to be a particularly serious problem. Yes, I had other ID, although nothing government-issued. I had my boarding pass on my phone. I have TSA Pre. They gave me a thorough pat-down and inspected my luggage very carefully. I was both impressed and surprised that I was able to hop on my flight.

I thought I had dodged a bullet.

Land SFO. Take shuttle bus to hotel. I won’t name the hotel. Let’s just say it’s a lower end chain I wouldn’t normally stay at but, as I said, this was a very last minute trip and with my usual chains either sold out or going for $700 a night I figured I could put up with the relative dump for a couple of nights.

They have my reservation that I made online. Give them my credit card.

“ID please.”

I tell my story. Consternation. “Umm, do you have a passport?”

Well, no. But I can show you any number of cards. Here’s my company badge with a photo. You can easily look me up online. 

Nope. It was starting to look as if I’d have to start dialing various friends in the Bay area to see if they had a spare couch I could use.

At this point, what I really wanted to say was: “Look. If I wanted to concoct some complicated scam for free hotel nights that somehow involved having 1.) an online reservation, 2.) a wallet full of cards including the credit card used to make the reservation, 3.) an official-looking company ID, but 4.) no government-issued photo ID, I’m pretty sure it would be at an exotic resort and not an SFO fleabag.”

To bring us back to the original topic: sure, you can always impose more hard-and-fast rules, but you really need to think about whether inflexibly imposing those rules is the best approach for the business.

Lesson #2: Think about whether potential risks justify the costs of eliminating them (which you can never fully do anyway)

In the end, I was able to check in. I didn’t say what I was thinking and we reached an agreement whereby I could pay cash, including a security deposit. (Fortunately, the dollar amount was small enough that I was able to withdraw what I needed from the ATM in the lobby.) Luckily, I did have my company ID with a photo; I don’t think they’d have let me stay with no photo ID at all—my face being all over the Web notwithstanding. 

So I do give some small amount of credit to the local manager for bending, however slightly, to what I have to assume are quite rigid corporate rules.

Lesson #3: Empower employees to do the right thing as much as possible

I was also pleasantly surprised at how easy and relatively inexpensive ($25) it was to replace my driver’s license on the Massachusetts DMV site. Which brings us to our last lesson.

Lesson #4: If your policies and customer experience fail to meet the standards set by both the TSA and the Massachusetts DMV, I’m pretty sure you’re doing something wrong

 

Podcast: Talking Kubernetes community at CloudNativeCon

Wrapping up the week at CloudNativeCon, I sat down with Google’s Paris Pittman, Heptio’s Jorge Castro, and Microsoft’s Jaice Singer DuMars to talk about their roles as Kubernetes community leads. Kubernetes has become so successful in large part because of the strength of its community. In this podcast, we talk about mentorship, getting involved, and being a welcoming community. 

Listen to the MP3 [26:56]

Listen to the OGG [26:56]


Thursday, January 04, 2018

Podcast: HashiCorp's Armon Dadgar on "secret sprawl" and Vault


HashiCorp co-founder and CTO Armon Dadgar and I recorded this podcast at CloudNativeCon in Austin. We talk about the problem of secrets management, the changing nature of threats, the need to be secure by default, HashiCorp's Vault project, and running Vault on Red Hat’s OpenShift.
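
As a rough illustration of the kind of workflow we discuss, here is a minimal, hypothetical sketch using Vault's KV version 2 secrets engine through the hvac Python client. The address, token, path, and secret values are invented, and it assumes a local dev-mode Vault server.

```python
# Hypothetical sketch: write and read a secret from Vault's KV v2 engine.
# Assumes a local dev-mode Vault server and the hvac client library;
# the URL, token, path, and secret values are invented for illustration.
import hvac

client = hvac.Client(url="http://127.0.0.1:8200", token="dev-only-token")

# Store a secret centrally instead of scattering it across config files.
client.secrets.kv.v2.create_or_update_secret(
    path="myapp/db", secret={"password": "s3cr3t"}
)

# Applications read it back at runtime rather than hard-coding it.
read = client.secrets.kv.v2.read_secret_version(path="myapp/db")
print(read["data"]["data"]["password"])
```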

The Vault project

OpenShift blog post on Vault integration

Listen to MP3 [17:40]

Listen to OGG [17:40]

Wednesday, January 03, 2018

Podcast: Heptio's Joe Beda talks Kubernetes


Heptio's CTO, Joe Beda, made the first public commit to Kubernetes. In this podcast he talks about ark (an open source project for Kubernetes disaster recovery), what made Kubernetes take off, why companies are moving so quickly on cloud-native, and where Kubernetes is headed.

From Joe’s perspective, companies realize that they’re at an inflection point and they have a sense of urgency about how they need to move quicker than in the past. That’s one of the factors that have driven container adoption at a faster pace than, say, virtualization even though the latter was arguably less disruptive to existing processes and infrastructure.

The next phase will be making the most effective use of Kubernetes clusters once they’re in place. Integrating them with other systems. Delivering value to customers on top of them. 

  • ark, a utility from Heptio for managing disaster recovery of Kubernetes clusters, as discussed on the podcast

Listen to podcast in MP3 [12:42]

Listen to podcast in OGG [12:42]