Thursday, November 20, 2014

Links for 11-20-2014

Open source accelerating the pace of software

When we talk about the innovation that communities bring to open source software, we often focus on how open source enables contributions and collaboration within communities. More contributors, collaborating with less friction.

However, as new computing architectures and approaches rapidly evolve for cloud computing, for big data, and for the Internet of Things (IoT), it's also becoming evident that the open source development model is extremely powerful because of the way it allows innovations from multiple sources to be recombined and remixed.

Read the rest on opensource.com...

Wednesday, November 05, 2014

Slides: What manufacturing can teach us about DevOps


What manufacturing teaches about DevOps from ghaff

Software development, like manufacturing, is a craft that requires the application of creative approaches to solve problems given a wide range of constraints. However, while engineering design may be craftwork, the production of most designed objects relies on a standardized and automated manufacturing process. By contrast, much of moving an application from prototype to production and, indeed, maintaining the application through its lifecycle has often remained craftwork. In this session, Gordon Haff discusses the many lessons and processes that DevOps can learn from manufacturing and the assembly line-like tools, such as Platform-as-a-Service, that provide the necessary abstraction and automation to make industrialized DevOps possible.

The core product matters

From a post-election comment.

[Ref:] 8. The Democratic ground game is losing ground. That was one I always doubted. There were lots of stories about how good the Democratic ground game was. Also a lot of related stories about how good the Democratic analytic folks are, and how good the Democrats were with social media.

But having worked on large analytics projects for retailers most of the last 30 years, the Democratic stories sound really similar to those I've heard in the business world. When something works lots of people want to claim credit for it working. Getting credit for a successful piece of a campaign gives a consultant the ability to charge higher rates for years. And it doesn't much matter if that campaign was to elect a candidate or to get people to visit a store.

The reality is when there's a good product that people want the marketing is easy. And analytic tweaks to the marketing message are at best of marginal value. When people don't want the product the marketing and analytics won't save it. President Obama had lots of people who were proud of him being the first black president. They were easy to get to the polls. It didn't take a great ground game, great analytics people, or an inspired social media presence. It just worked.

A lot of effort goes into marginal things: product names, endless branding details, obsessive focus on details that aren't even "on the screen." Sometimes it does add up. Or it's an inherent part of an overall mindset or approach that can't be divorced from what is on the screen.

But blocking and tackling is usually most evident when it’s absent or deeply flawed. Suspicion is probably warranted when extraordinary claims are made for results stemming from optimizations made far outside the core product.

Reducing the Wisdom of Crowds

This is something we’ve studied a lot in constructing the FiveThirtyEight model, and it’s something we’ll take another look at before 2016. It may be that pollster “herding” — the tendency of polls to mirror one another’s results rather than being independent — has become a more pronounced problem. Polling aggregators, including FiveThirtyEight, may be contributing to it. A fly-by-night pollster using a dubious methodology can look up the FiveThirtyEight or Upshot or HuffPost Pollster or Real Clear Politics polling consensus and tweak their assumptions so as to match it — but sometimes the polling consensus is wrong.

I find this an interesting point from Nate Silver over at FiveThirtyEight. I think I’ve seen something similar in Oscar contest data that I’ve analyzed. It wasn’t unequivocal but:

The trend lines do seem to be getting closer over time. I suspect... we're seeing that carefully-considered predictions are increasingly informed by the general online wisdom. The result is that Consensus in the contest starts to closely parallel the wisdom of the Internet because that's the source so many people entering the contest use. And those people who do the best in the contest over time? They lean heavily on the same sources of information too. There's increasingly a sort of universal meta-consensus from which no one seriously trying to optimize their score can afford to stray too far.

There are some fancy statistical terms for some of this but fundamentally what’s happening is that information availability, aggregation, and (frankly) the demonstrated success of aggregating in many cases tend to drown out genuine individual insights. 
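To make the hand-waving slightly more concrete (a back-of-the-envelope sketch, nothing specific to FiveThirtyEight's actual model): if n forecasters each have error variance σ² and their errors are pairwise correlated with coefficient ρ, the variance of their averaged forecast is

    \mathrm{Var}(\bar{x}) \;=\; \frac{\sigma^2}{n} \;+\; \frac{n-1}{n}\,\rho\,\sigma^2 \;\longrightarrow\; \rho\,\sigma^2 \quad (n \to \infty)

As herding pushes ρ toward 1, adding more forecasters stops reducing the error of the consensus. The crowd increasingly speaks with a single correlated voice, which is just another way of saying the wisdom of crowds gets reduced.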

Tuesday, November 04, 2014

Links for 11-04-2014

Podcast: Docker, Kubernetes, and App Packaging with Red Hat's Mark Lamourine

My colleague Mark has been doing lots of work on Docker, managing Docker across multiple hosts, and integrating Docker with other functionality such as that provided through the Pulp content repository. Mark sat down with me to talk about some of his recent experiences working with these technologies.

Mark’s blog which drills into many of the things we discuss here
The server virtualization landscape, circa 2007

Listen to MP3 (0:18:30)
Listen to OGG (0:18:30)

[Transcript]

Gordon Haff:  Hi everyone. This is Gordon Haff with Red Hat, and today I'm sitting here with one of my colleagues, Mark Lamourine, who's also in the cloud product strategy group. Mark's been doing a bunch of work with Docker, containers, Kubernetes, figuring out how all this stuff works together. I decided to sit Mark down, and we're going to talk about this. Welcome, Mark.
Mark Lamourine:  Hello.
Gordon:  As a way of kicking things off, one of the things that struck me as interesting with this whole Docker application packaging business is that you go back a few years...and I was looking at all these different virtualization types. I'll try to find some links to stick in the show notes. One of the interesting things when you're looking around the mid‑2000s or so is there was all this work going on with what they were calling application virtualization.
This idea of being able to package up applications within a copy of an operating system in a way that really brought all the dependencies of an application together so that you didn't have conflicts between applications or missing parts and that kind of thing. One of the things that I find interesting around Docker now is it's, at its heart, using operating system containers as another form of virtualization.
What Docker's really done is it's added some capabilities that solve some of the same problems that application virtualization was trying to introduce.
Mark:  One of the things about much of this technology that people have observed is that in many cases the technology itself isn't new, and in some cases the concepts aren't even new. If you look at something like Java, the origin of Java and virtual machines goes back to Berkeley Pascal, which is decades old, but it wasn't ready because the hardware wasn't ready. The ideas were there.
In the case of application virtualization, people were taking this new thing and saying, "What can I do with it?" What they found was you could, indeed, build complete applications into a virtual machine, and for a while people were passing around VMware images to run.
VMware actually had a Player whose job it was to take these images and provide them to people, but they turned out to be heavy‑weight. They tended to not be designed to interact with lots of things outside. It was a great place to put everything inside, but once you had it inside it didn't interact very well with anything else.
We've gone on with machine virtualization as the hardware's gotten better. We've used it in ways that we found appropriate. We weren't sure what the patterns would be back then. We found the patterns for machine virtualization, but they didn't include this kind of idea. It didn't seem to work out.
The impetus for containerization, which is in some ways a reuse of the old multi‑tenant, log-onto-the-machine-and-get-an-account style of computer use, is that with the creation of things like cgroups and namespaces, and the ideas that came from Solaris Zones, we've suddenly got a new way of putting all of this stuff together. It promises to give us what we were looking for back in 2000 with machine virtualization but which didn't then seem to be the best use of resources.
We're looking now at containerization that is much lighter weight. It doesn't provide its own operating system, it doesn't provide its own networking stack, and it doesn't provide its own self‑contained disk storage within the containerization mechanism. Those gaps pose problems because we have to provide the isolation some other way, which is what cgroups and namespaces do.
They also provide opportunities because you can now achieve greater density with these containers, assuming we can solve the access problems. If we can solve these other missing pieces, or make it so they're not necessary, we can achieve the goal people were chasing with machine virtualization a decade ago, in something that actually scales well.
Gordon:  The other thing that has happened is the use case has changed, so to speak. Containers were actually reasonably successful in at least certain niches of the service provider world. The reason was that in the service provider world you have these very standardized types of machines that people use. You couldn't manage that scale any other way.
At the time, that was very different from the way most enterprises were set up, where you had all these unique operating system instances. In a sense, the service provider model has come to the enterprise. The service provider type of use case is now, increasingly, the better model, certainly for IaaS and cloud‑style workloads.
There are a couple of things you talked about that I'd like to dive into a little deeper, but first it would be useful to step back. We've been throwing around Docker, and we've been throwing around containers. Sometimes those two concepts, those two technologies, are viewed as different names for the same thing, but that's not really true, is it?
Mark:  In a practical sense, right now it's fairly close to true. I try to make the distinction between containers, which are a conceptual object, and Docker, which is a current implementation we're working with, because the possibility exists that somebody's going to come along and create one that is different.
We were just talking a couple of minutes before we started about Docker within Microsoft, and I observed that some of the things that make Docker possible are, at least right now, very Linux‑centric features ‑ the cgroups and namespaces. I don't know the state of those technologies within the Microsoft operating systems.
There's certainly no barrier I can think of to creating them, or the features may already even be there if you know the right places to look. I can imagine that someone would create a Microsoft version, or even a Macintosh Darwin version, of what Docker does using slightly different technologies.
Containerization is certainly more general. It's putting processes on an operating system and then creating a new view for them so that they don't see the same thing that a general process does. Docker does that.
Gordon:  The other interesting aspect of Docker, of course, though, is it goes beyond the foundation infrastructure element and gets into how you package up applications and how you deploy applications. That's the other thing that has gotten people interested in containers broadly and Docker, in particular, these days.
Mark:  I can remember the first time I saw Docker. I had been working on OpenShift on the isolation pieces. OpenShift is a platform as a service. It provides multi‑tenant services on a single box without virtualization, without traditional virtualization. I'd been working on the isolation pieces so that one customer's processes couldn't see what was happening with another customer's processes and couldn't damage them or whatever.
I was doing lots of ad hoc work to try to make all of this stuff work. When I saw Docker I was like, 'OK, that's it. I've been going at it the hard way all along.' The novelty, as you said, wasn't the underlying technologies. The novelty was that they had made a good set of starting assumptions and then made those assumptions easy.
The problem with a lot of container systems ‑ we were struggling with OpenShift and people had problems with Solaris Zones ‑ was that it was difficult to create a new one because there were a lot of moving parts and knobs. One of the Docker novelties was that you've got this very concise, very clear mechanism for describing what goes into your images which form your containers, and creating new ones is very easy.
That's in contrast to everything that I've seen that went before, and that was the thing that a lot of people have jumped onto. It's not that this was necessarily new but all of a sudden it's easy to use at least in its simplest form.
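[Editor's note: the "very concise, very clear mechanism" Mark describes is the Dockerfile. As a minimal sketch (the base image, package, and file names here are illustrative, not taken from this discussion), an entire image can be described in a handful of lines:]

    # Start from a known base image.
    FROM rhel7
    # Layer on the packages the application needs.
    RUN yum install -y httpd && yum clean all
    # Add the application's content to the image.
    COPY index.html /var/www/html/
    # Document the port the service listens on.
    EXPOSE 80
    # The single foreground process the container runs.
    CMD ["/usr/sbin/httpd", "-DFOREGROUND"]

[Running "docker build" on that file produces an image; "docker run" then starts containers from it.]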
Gordon:  We've been talking about the individual microservice or the individual application in the context of containerization and Docker. You've also been looking at some of the ways that you can orchestrate, manage, scale groups of Docker containers together.
Mark:  One of the things that I was interested in right away was the fact that Docker is designed to work on a single host. When you create a container, all of the commands that you use in Docker relate to that single host, with one exception, which is pulling down new images. Those can come from the Docker hub or they can come from another repository.
Once you get them started, they all run on one host. That's where Docker stops, but we're all getting very used to the idea of cloud services where the location of something shouldn't matter and the connectivity between it shouldn't matter. When you start thinking about complex applications using containers, the first thing you think is, 'OK, I'm going to put these everywhere and I don't care where they are.'
Docker doesn't address that. There are a number of things being developed to fill that gap. One that we're looking at seriously at Red Hat is Kubernetes. Kubernetes is a Google project which I believe comes out of their own internal containerization efforts. I believe they've started to re‑engineer their own container orchestration publicly to use Docker.
They're still just beginning that. It became evident very soon, trying to build complex applications that span multiple hosts with Docker and with Kubernetes, that there are still pieces that need to be ironed out. I picked a project called Pulp, which we use inside Red Hat for repository mirroring, because it had all the parts that I needed. It was a good model.
It has a database, it has a message broker, it has some persistent storage, and it has parts that are shared by multiple processes. Those are all use cases and usage models that I know people are going to want, and Pulp has them all in a single service.
I thought, 'If I can get Pulp working on Docker using Kubernetes, then that will expose all of these issues.' Hopefully I can get something that actually works, but in the process I should be able to show where the easy parts are and, more importantly, where the parts that are still hard or unresolved are.
Gordon:  How do you see potential listeners, customers, how are they most likely to encounter Docker and how are they most likely to make use of it in the near‑term and mid‑term?
Mark:  The funny thing is that they won't know it. If all of this works out, people who are using web applications or cloud applications won't know, and won't care, what the underlying technology is.
When it comes to people developing applications, that's where things are going to get interesting. Those people are going to see something fairly new. We're used to doing web applications where we have a web server and maybe a database behind it and we have traditional multi‑tier applications. We're moving those into the cloud using traditional virtual machine virtualization.
One of the things that Docker and Kubernetes promise is to create a Lego system, a kind of building block system, for whole applications that hasn't been present before. When we deliver software at Red Hat, traditionally we deliver it as RPMs, which are just bundles of bits; they go on your box and then you're responsible for tying all the parts together. We may write some configuration scripts to help do that.
What Docker promises to do, if everything works out and we address all the problems, is that you would be able to go down a shelf and say, "OK, here's my database module, here's my storage module, here's my web service module."
I could imagine a graphical application where you drag these things off some kind of shelf, tie them together with some sort of buttons, and say, "Go." That thing goes out and comes into existence and tells you where it lives and you can point people at it and put code there. The developers are going to be the people who see this.
There's also going to be another layer of developers: the people who create these building blocks. That's still something that's up in the air, both the magical, unicorn world I just described and the dirty‑hands work of creating these things. Those are both works in progress.
Gordon:  Sounds like object‑oriented programming.
Mark:  Except that object‑oriented programming quickly devolved into 'it's exactly what I want, only not.' We have to avoid that pitfall. We need to figure out ways of making good assumptions about what people are going to need, indicating what those assumptions are, and providing ways for people to extend things, but also incentives not to: to use things as they are when possible and to extend them only when necessary.
Gordon:  It's interesting. Going back to the beginning of this podcast, we talked about application virtualization. As you correctly said, the reason you're maybe not super familiar with it is that it never really took off as a mainstream set of technologies. On the client side, what did take off, because it solved real problems and was what people were looking for, is, essentially, things like the Android app store and the Apple app store.
That is client‑side application virtualization, in a sense. I think that same approach of solving actual problems that people have, without turning it into some complex OASIS or ISO type of standard, is where you'll see the real benefit and the real win.
Mark:  I agree. That store was made possible by advances in network technology and storage technology. We're seeing advances that are causing the same kind of disruption in programming and software development.
When Android and iOS were designed, they were designed specifically to set the boundaries on what developers could choose. There's an Android SDK which gives you an abstracted view of the underlying hardware for whether it's phone or GPS or storage or CPU or whatever, and iOS has similar abstractions. When you develop an iOS app, you're still developing one self‑contained thing which uses these parts.
In a lot of cases, you create one which lives on the phone but communicates with some service out in the cloud, and that's a very common model now. It'll be very interesting to see a couple of different things that could develop.
One is that people will use these kinds of containers to create the web‑side applications that go with things like an Android app or an iOS app, but the possibility exists that you could actually create these things and run them on your phone. You could create something where your application is composed of a set of containers that run on your phone.
Those things still need to be worked out because right now ‑ we had discussed at another time ‑ Android runs Linux and Linux, as of not too long ago, has cgroups in it, which is the enabling technology for Docker containers. It's already there. You could conceivably write an application in C that you could load onto an Android phone that would run using Docker containers.
You need to port Docker. There are lots of things that need to be moved over, but all of those are software above the operating system. Android phones already have that operating system part there. I don't think it'll be very long before people are trying to add the parts, and it'll be interesting to see if Google has any strategy for containers, for Docker‑style containers, in the Android environment.
If they don't do it I suspect somebody else is going to try it, and we'll find out whether it works or not when they do.
Gordon:  Great, Mark. There's lots more things I'd like to discuss, but we're going to cut off this podcast and save up some of those other topics for next time. Thanks, Mark.
Mark:  You're welcome. Thank you very much.

Gordon:  Thank you everyone...

Monday, October 27, 2014

Why is lobster "market price"?

The price of lobster, like the price of anything else, is set in a market. But the market price you pay is fundamentally a price determined by the restaurant market, not the market for lobsters. And the issue is a basic one of capacity and competition.

Think back to the Fisherman’s Friend and its excellent location. Stonington is a great place to visit. But it’s also a very small town. There aren’t very many places to eat. And if it’s a certain kind of coastal Maine seafood dinner experience you’re after, there aren’t any other places in town to go. There’s little reason to fear losing customers to the boil-at-home option as lobster prices fall but no reason to worry about a nearly identical competitor next door poaching your customers. Nor is there a nearly identical competitor next door whose customers you might hope to poach with a discount.

Cooking and eating lobster at the house in Maine

I’ve noticed this frequently in Maine. The lobster price (especially for small, soft shell lobsters—i.e. the most advertised price) is a very competitive market-driven thing. The same boiled lobsters at lobster pounds are too because they’re pretty hard to decouple from the live and kicking versions. But lots of other forms of lobster, including lobster rolls and even refrigerated lobster meat, tend not to drop accordingly.

It’s also worth noting, per another conversation I had recently, that it’s not immediately obvious why so many restaurants list their lobster as “market price” given that the price of many of their fish and other expensive ingredients presumably varies by season as well. My cynical nature wonders if this isn’t primarily a ploy to avoid publishing the price and to use that lack of transparency to wrest a few extra dollars for a perceived luxury item.

Friday, October 24, 2014

Review: Nixeus in-ear earphones/mic


Given the amount of traveling that I’ve been doing over the past couple of years, I decided to kick off a series on this blog taking a look at some of the (often morphing but fairly compact) pile of gear with which I travel. This is the inaugural post on this theme.

I favor big over-the-ear headphones when I’m editing podcasts at home. For travel? Not so much. Small and lightweight is the name of the game whether I’m plugging into a conference call or just listening to some music.

The Nixeus in-ear earphones are a nice example of compact earphones that can be used for either phone calls or listening. Their MSRP is $39.95 but they’re available for about half that on Amazon as of this writing.

They come with three sets (S, M, and L) of roughly cylindrical foam earbuds that you can use to tailor their fit. Like other earphones of this general type, the idea is to fit them relatively snugly into your ear—both to better block ambient sound and to keep them from falling out. From a fit perspective, I think of this type of design as something intermediate between iPod-style earbuds which just sit loosely in the ear and the silicone-style ear tips which you press fairly tightly into your ear canal. 

One of the challenges with reviewing this type of product is that fit and comfort are ultimately very much a matter of preference and the geometry of your particular ear. For extended music listening, I still prefer the silicone ear tip design such as Klipsch uses for its (significantly more expensive) X4i. On the other hand, I know a fair number of folks who just don’t like what they describe as “jamming” said silicone ear tips into their ear. 

What I can say is that the Nixeus earphones have a much more solid fit than standard earbuds and, in part for this reason, their audio quality is commensurately better as well. The sound quality (both for the earphones and the mic) is as good as or better than other examples of the same general design that I’ve tried.

Really, for the price, if you’re still using basic earbuds, give these or something else like them a spin. You’ll be glad you did. With Christmas coming up, it’s also probably worth mentioning that the Nixeus packaging is sleek and modern (with a magnetic closure for the box) so it looks like something costing more than it actually does.  

[Disclaimer: These earphones were provided to me for review purposes. No other compensation was provided and the opinions in this review are mine alone.]

Wednesday, October 22, 2014

Podcast: Private and hybrid storage sharing using ownCloud

ownCloud CTO and co-founder Frank Karlitschek sat down at CloudOpen in Düsseldorf to talk about how ownCloud lets companies offer their employees a "Dropbox-like" experience while retaining control of where data is stored and how it is accessed. It's a hybrid approach to cloud storage that can be important in a world where jurisdictional concerns can be a major CIO headache.

How ownCloud works in concert with Red Hat Storage Server
ownCloud whitepapers

Listen to MP3 (0:15:20)
Listen to OGG (0:15:20)

Wednesday, October 15, 2014

Links for 10-15-2014

Sunday, October 12, 2014

What do people mean by cloud security?

Security continues to top the charts when IT folks are asked what most gives them pause about using a cloud—especially a multi-tenant public one. This invites the retort: “Do they really think they know how to secure systems against attackers better than Amazon does?” Probably not. But “security” in this case often means something quite different from just keeping the bad guys out.

This is a general observation, and not a particularly original one. Back in 2011, I was writing about how cloud governance was about more than security. More recently, I’ve given many presentations delving into how cloud security is a much broader topic than just security classic.

But the extent to which cloud “security” goes beyond security classic (most classic concerns still matter as well) was reinforced during a couple of sessions at 451 Research’s Hosting + Cloud Transformation Summit held in Las Vegas last week. Those sessions also provided some color about what people mean by that “security” word.

In his keynote, Research VP William Fellows reiterated that security—perceived and real—continues to come up regularly in cloud discussions. However, he went on to say that it’s actually jurisdiction which is the number one question. Perhaps not surprising really, given the headlines of the last year, but it reinforces that when people voice concerns about security, they are often talking about matters quite different from traditional infosec headaches. (Attorney Deborah Salons sat down to do a podcast with me early last year on data governance issues. The link includes a transcript for those who prefer reading.)


Michelle Bailey, VP of Datacenter Initiatives and Digital Infrastructure, fleshed out these security concerns in more detail during her session. The question she was answering was a bit different: “What are the top three things that providers can do about security?” Presumably certain types of security concerns (e.g. malware in a company’s POS systems) aren’t something a provider could be expected to do a lot about. Nonetheless, I expect there’s a high correlation between someone being concerned with some aspect of security and valuing providers who can mitigate that risk.

Data locality comes up here too. This is a hot topic among cloud providers and one of the reasons, besides sheer volume, for their rush to build new data centers. In other words, people want to be able to choose, say, an Amazon region that is sufficiently constrained geographically from the perspective of judicial orders or other authority. It’s about knowing the laws to which they may be subject.

But broadly, I’d characterize the top wants as being fundamentally about visibility and control. Transparency, auditability, verifiable encryption, control over encryption. And indeed pretty much the whole rest of the list is either related characteristics or various standards and documentation to help ensure that cloud providers do the things they promise to do.

Conspicuously lacking is pretty much anything in the vein of physical security, DDoS mitigation, or firewall configurations. That’s because, while important, they’re largely viewed as solved problems from the perspective of the cloud provider.

Mind you, given the shared responsibility model that comes into play when you use a cloud provider, you share responsibility for the workloads that you’re running on the cloud provider. You’re still running and patching the operating system running in the cloud. But you know how to do that; you basically do the same thing you do on-premise. (Obligatory plug for Red Hat Enterprise Linux and our Certified Cloud Provider Program here. I should have a new whitepaper out soon.) 

For these and other reasons, Michelle concluded that “the end game isn’t public cloud, it’s hybrid cloud. And you can bet on that for the next 5 years.” And that security, among other factors, will lead to hosting providers remaining a “very long tail market” in which messaging, targeting, and matching strengths with customer requirements will continue to offer many opportunities for differentiation.

Sunday, October 05, 2014

Topsfield Fair

I went over to the Topsfield Fair on a drizzly Saturday. I'd never been to this one and hadn't been to the Bolton Fair (which is actually held in my town of Lancaster) for a few years. It is so New England-y!

Thursday, October 02, 2014

Links for 10-02-2014

Wednesday, September 24, 2014

Podcast: OpenShift Enterprise v3 with Joe Fernandes

In this podcast, Red Hat's Joe Fernandes talks about all the technologies and products that come together in OpenShift Enterprise v3, the upcoming on-premise version of Red Hat's platform-as-a-service. These include Red Hat Enterprise Linux and Project Atomic, Linux Control Groups, Docker, and Kubernetes.

Joe runs product management for OpenShift and he brings a great perspective on how these various capabilities dovetail with each other to deliver what's ultimately the most important thing: the developer experience.

Listen to MP3 (0:20:04)
Listen to OGG (0:20:04)

[Transcript]

Gordon Haff:  Hi everyone. This is Gordon Haff in Cloud Product Strategy at Red Hat, here with another episode of the Cloudy Chat podcast.
Today I'm joined by Joe Fernandes who runs product management for OpenShift.
Gordon:  Today we're going to talk about OpenShift V3, the next upcoming version of OpenShift, which is Red Hat's enterprise PaaS.
Not so much to get into a lot of product details but because it's a convenient way to talk about all of the interesting new technologies that we're working on and which are coming into Red Hat today. OpenShift pulls these together.
Joe, at a top level why don't you lay out the agenda and then we can dive in some of the details?
Joe:  We've been out with OpenShift for quite some time. We actually launched it in developer preview over three and a half years ago now. Today we have both the commercial public PaaS service in OpenShift Online as well as a private PaaS product in OpenShift Enterprise.
As you mentioned, OpenShift ties together a lot of different Red Hat and open source technologies and wraps a nice developer experience around them. OpenShift is built around Linux containers. It deals with orchestration of those containers at very large scale. It packages in a bunch of interesting developer services and, like I said, wraps a nice experience around all of that to accelerate application development and bring more agility to companies who are building applications.
Gordon:  Let's break that down into its components. First of all you got a platform there. Tell us a little bit about the underlying platform that OpenShift runs on.
Joe:  OpenShift is built on a platform of Red Hat Enterprise Linux. We make use of a lot of technology within RHEL to basically bring the OpenShift product and all of its capabilities to light. Today OpenShift is built on RHEL 6, but what we're working on in V3, as you alluded to, is bringing this onto RHEL 7 and moving to our new container model, which will be based around Docker.
OpenShift will leverage the Docker capabilities that were introduced in RHEL 7 when it was launched in June. It will also be able to make use of the RHEL Atomic host, which is a new product that we've announced around the upstream Project Atomic.
RHEL Atomic host is not commercially available yet, but Project Atomic is an active community that we launched back in the Summit time frame in April to work on a Linux host that's optimized for containerized applications. That's important to OpenShift because everything we run runs in Linux containers.
What we can deliver here with Atomic is a very lightweight host that's optimized for those environments, but it also brings a new model for how you manage the host: you're managing it in an atomic way as well.
Gordon:  The interesting thing about Atomic is that people out there say, "With platform-as-a-service the operating system doesn't matter any longer." VMware was on that particular kick a couple of years ago, and they still don't believe the operating system is important. I wonder why.
One of the things we see with Atomic and with Red Hat Enterprise Linux more broadly is that as we are talking about these containers we're talking about the application packaging through Docker. The operating system is very much at the core of all that.
Joe:  Absolutely. Those applications have to run somewhere. They're running in Docker containers, but those containers are running on a host OS. It's that host OS that's providing a lot of the capabilities: security and isolation through kernel namespaces (we do a lot of work with SELinux to implement the layered security model), and cgroups for resource confinement. All of that comes from the host operating system.
We have a long history of working on Linux kernel technologies going back more than a decade. A lot of that expertise is what we're bringing to not only Atomic but to communities like Docker and other communities in the Docker ecosystem and in the containers ecosystem.
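[Editor's note: as a concrete sketch of the resource confinement Joe mentions, Docker exposes cgroup limits directly on the command line; the values and image name here are illustrative:]

    # Cap the container at 512 MB of memory and halve its relative CPU share.
    # Both limits are enforced by the host kernel's cgroups.
    docker run -it -m 512m --cpu-shares=512 rhel7 /bin/bash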
Gordon:  Let's talk a little more about Docker and vis‑a‑vis application packaging specifically around Docker and how that relates to OpenShift and how it relates to our cartridge model.
Joe:  It's funny. A lot of people are starting to hear about Linux containers due to the popularity of Docker, but the underlying containers technology has been around for a while. Things like Linux control groups, as I mentioned, kernel namespaces. We've been using those for years in OpenShift and even further back our customers have been using those on RHEL 6. Companies like Google have been using containers technology at scale for a long time as well.
What Docker really brings is a new packaging model for applications that run inside containers, and it enables what we refer to as application‑centric IT. This packaging model, which basically starts with a Dockerfile that gets built into an atomic Docker image, brings things like portability of your applications across different environments.
The Docker image runs everywhere that a compatible Linux host would run. As people know, Linux runs everywhere. That means you'll be able to take that image and run it not only in the public cloud with providers like Google and Amazon, who have already announced support for Docker, but in your private cloud as well, regardless of whether that private cloud is built on OpenStack, on VMware virtualization technologies, or even on bare metal servers.
It spans all the footprints from bare metal to virtualization to private cloud and even public cloud. That's powerful.
Some of the other things that we see are the benefits that it brings to development and to the operations team that runs the PaaS environment. When we launched OpenShift more than three years ago we decided to build it around containers because of the speed with which you could deploy applications within those containers.
When a developer comes to OpenShift they can spin up applications in seconds. That wouldn't be possible if we were spinning up a full guest VM for each application or a group of VMs for larger applications where you have to bring up an OS within those VM guests. The speed and agility that containers enable for developers is very exciting.
It also brings a lot of efficiencies for IT both in making more efficient use of the infrastructure that it runs on, getting more density of applications per host but also making it more efficient to manage because you're managing a single host kernel that's now running multiple applications as opposed to managing a host kernel and then guest OSs for each VM.
All these capabilities, like I mentioned, come not just from Docker but from the broader containers movement. It's exciting to see so many people getting involved in that movement and to see new capabilities being introduced in this area, both from Red Hat and from ISVs like Docker.
Gordon:  OpenShift offers features like automated scaling that can span a number of servers, and OpenShift itself can run across a large population of VMs or bare metal servers, your choice of where to run it. Maybe talk a little about where we're going with handling that multi‑host environment.
Joe:  As you mentioned, containers are great, but applications don't run in a single container. Typical applications that we see in OpenShift will span multiple containers, and those don't all just run on one host; they're going to span multiple hosts.
In our OpenShift online environment, for example, we have hundreds of VM instances hosting hundreds of thousands of applications. Each application may have a handful to a large number of containers that form that stack. This is all an orchestration and scheduling challenge and it's the role of the OpenShift broker.
When we decided to architect OpenShift V3 one of the key decisions was moving to the Docker standard for containers. We were excited to see a standard building around containerization. What we also have decided is to take our orchestration technology and work with Google on the Kubernetes project.
Kubernetes brings what we think will be very exciting web‑scale orchestration capabilities to OpenShift V3 and to the broader open source community. We've joined forces with Google and a number of other companies around orchestrating Docker containers via Kubernetes.
It gives us the model for how these application stacks are built, through concepts like Kubernetes pods deployed across different instances, and it allows us to do things like connect a container running your application, say a JBoss or Tomcat application, to another container that may be running your database tier. Or connect it to a cluster of database instances that are each running in their own container, or scale up a cluster of Tomcat or Python or PHP instances and connect that to a load balancer or a web tier.
All these things are different containers that need to be orchestrated and then deployed. That's where orchestration comes in in the OpenShift V3 model.
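[Editor's note: here is a sketch of what describing one such piece looks like to Kubernetes. The manifest syntax was evolving rapidly at the time, so this uses the shape the API later settled on, and the names are made up:]

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-web
      labels:
        app: myapp            # labels are how other objects find this pod
    spec:
      containers:
      - name: tomcat
        image: tomcat:7       # the application tier
        ports:
        - containerPort: 8080

[A database tier would run in its own pod, with a Kubernetes service giving it a stable address for pods like this one to connect to.]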
Gordon:  Let's switch gears a little bit from the underlying technology enabling OpenShift to what's happening in the services that run on OpenShift with the ecosystems associated with OpenShift. What's happening there?
Joe:  This gets to the heart of what matters to developers. All that underlying stuff ultimately is what enables developers to build the applications that they care about. In order to do that you need to be able to support the languages, the frameworks that they want to build those applications in and be able to provide the other services their applications require whether that's databases or messaging or other middleware, even continuous integration and other capabilities.
The packaging that we have in OpenShift for that today is called OpenShift Cartridges. In V3 that packaging will be based on Docker images. One of the biggest things we saw in Docker was a very flexible and powerful packaging format for these application components, and then a very broad ecosystem in the Docker hub, where literally thousands of these component images exist.
Whether you are looking for a Ruby stack or a Python stack or you're looking for an image from MongoDB or MySQL or what have you, you can find not just one but hundreds of examples of these services in the Docker hub today. Our goal in OpenShift V3 is to allow you to run arbitrary images from that ecosystem in your OpenShift environment.
A couple of things there. We're going to start with our own products. In OpenShift we leverage technology from our JBoss middleware division, as well as technology that's packaged for RHEL as part of Red Hat Software Collections, to provide some of the things that come out of the box when you use OpenShift or purchase OpenShift Enterprise. Things like JBoss, things like Tomcat; we're about to launch a new Fuse cartridge and so forth. We're packaging all of our existing products as Docker images so that they can be available as services within OpenShift V3.
What we're also doing is working with ISVs on certifying their images through our Red Hat container certification program to run not only on RHEL but on products like OpenShift that build on top of RHEL and RHEL Atomic. What this does is enable enterprises to know the supported option, the safe choice, if they're looking for a particular stack.
They know that that's a good option because that's something that's certified not only by the ISV but also by Red Hat. That's the goal there. But ultimately the end goal is to provide as many services as we can to show that there's a limitless number of applications that customers can run on OpenShift and deploy and manage there as well.
Gordon:  To finish things up here, talk a little bit about what all this means for the developer experience, because that's really the name of the game.
Joe:  Developers care about what you have available, what's in their palette to build with, and then what you give them to build it with. Is this a familiar environment? Do they have the development tools and the debugging tools they need to build their applications in a way that's natural for them?
We've focused on giving different developers different interfaces to access OpenShift. That includes our web console, our command line interface, and also various IDEs. The Eclipse IDE through JBoss Developer Studio as well as support for other IDEs like IntelliJ, like Appcelerator and so forth that comes from our ISV partners that we work with.
That's a great starting point. From those interfaces developers can work on their code and then push that code directly to their OpenShift environment. We also then recognize that the developer, again, they need the container to provide the stack but what they care about is their code.
If they are a Java developer they care about their Java code. If they are a Ruby developer they care about their Ruby code. OpenShift basically allows the developer to take their code in whatever repository it lives in and then push that code directly to the platform, automatically have that compiled and running in a container and accessible in the application.
You can push binaries as well, but OpenShift is unique in allowing you to push either source code (we build it, manage all your dependencies, and deploy the application) or already configured binaries. That's going to continue in V3 through our integration with Git and the ability to essentially do a git push of your code and have it automatically rebuild your container image, your Docker image, with that code combined with the underlying stack, whether it's JBoss or Apache or whatever stack is running it.
Lastly is debugging. Some popular features include being able to debug directly from Eclipse, which you can do in OpenShift; port forwarding; and being able to SSH directly into your containers, into your gears, and work directly with the underlying runtimes, or run something like a database management tool to work with your database instances and so forth.
These are things that are all part of the developer experience that people value today in OpenShift and that we're bringing forward and continuing to build on as we move forward in OpenShift V3.
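[Editor's note: in the OpenShift of the time (v2), the push-to-deploy flow Joe describes looks roughly like this from a developer's terminal; the application name is made up and the cartridge is just one example:]

    # Create an application from a PHP cartridge; rhc also clones a git repo locally.
    rhc app create myapp php-5.4
    cd myapp
    # ...edit code, then commit...
    git add -A && git commit -m "Update app"
    # The push triggers OpenShift to rebuild and redeploy the application.
    git push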
Gordon:  By the way, if there are any developers listening to this and you haven't tried OpenShift Online, why not? It's free: no credit card, no anything like that. It's so simple even I was able to develop an application with it.
Joe:  That's the beauty. We get a lot of benefit by basically seeing both sides. In OpenShift Online, as you mentioned, any developer can come to openshift.com and with just an email address sign up for an account and get free access to up to three gears or three containers where they can host one to three applications, and those are gears that don't expire.
Then we have commercial plans. As you expand and need more capacity, or if you want professional support, you can move into one of our commercial tiers in the online service. That also provides great value to our OpenShift Enterprise customers, because all of the things that we learn by serving online developers and running the platform ourselves at scale feed back into our OpenShift Enterprise product, which we then package up and ship to customers who want to deploy their own PaaS.
In a sense, our enterprise customers are the administrators: they're deploying and administering OpenShift itself, and they learn from talking to our own administrators and benefit from the features and enhancements that we put into OpenShift to run our own online service.
We really are eating our own dog food at large scale, and we're able to run those businesses in parallel so they mutually benefit each other.
Gordon:  I should mention, because otherwise I'm going to get into a lot of trouble with our fine community manager, that OpenShift Origin is the open source project. It's the upstream version.
Joe:  That's right. Every commercial product at Red Hat is based on an open source upstream community project. That's where all our development happens, that's where the innovation lives. OpenShift Origin serves as the upstream for both OpenShift Online and OpenShift Enterprise.
Our community isn't just limited to Origin, as we've already discussed. Origin actually ties in other communities. On the host side it ties in communities like Project Atomic, like Fedora, even CentOS. On the container side, the Docker community is one we participate in heavily, and we pull that work into Origin.
I already mentioned that on the orchestration side we're members of the Kubernetes community and developing there. We're not just pulling the stuff in. Red Hat, as has been noted in a couple of articles recently, is actually one of the leading contributors to projects like Docker as well as projects like Kubernetes.
We're bringing the code versus just talking about these things or forking or pulling in these projects. We're working closely with each of these communities and those are the communities that make up OpenShift. Origin is upstream but Origin is at the center of a lot of other important communities.
I failed to mention the many JBoss communities that provide a lot of the middleware: things from the jboss.org ecosystem, whether it's WildFly on the application server side or the Fuse ecosystem on the integration side, and others as well.
It's exciting to work with so many vibrant communities and interesting to work on a product that pulls all these things together into a cohesive solution for our end users.
Gordon:  Thanks a lot, Joe. Anything we missed?
Joe:  For folks who aren't familiar with OpenShift, I encourage you to sign up for a free account. Give us a try. If you're looking for more information, feel free to reach out to us through openshift.com.
We're excited to have more folks involved in what we're doing in V3. We have some information on our website on how you can get a look at our latest and greatest stuff that's built around Docker and Kubernetes and Atomic and some of the things we've discussed here.

There'll be a full beta this fall but there's already code in the Origin upstream that you can download and try. We're looking at commercial availability on that platform some time next year.