Wednesday, September 24, 2014

Podcast: OpenShift Enterprise v3 with Joe Fernandes

In this podcast, Red Hat's Joe Fernandes talks about all the technologies and products that come together in OpenShift Enterprise v3, the upcoming on-premise version of Red Hat's platform-as-a-service. These include Red Hat Enterprise Linux and Project Atomic, Linux Control Groups, Docker, and Kubernetes.

Joe runs product management for OpenShift and he brings a great perspective on how these various capabilities dovetail with each other to deliver what's ultimately the most important thing: the developer experience.

Listen to MP3 (0:20:04)
Listen to OGG (0:20:04)

[Transcript]

Gordon Haff:  Hi everyone. This is Gordon Haff in Cloud Product Strategy at Red Hat, here with another episode of the Cloudy Chat podcast.
Today I'm joined by Joe Fernandes who runs product management for OpenShift.
Gordon:  Today we're going to talk about OpenShift V3, the upcoming version of OpenShift, which is Red Hat's enterprise PaaS.
Not so much to get into a lot of product details, but because it's a convenient way to talk about all of the interesting new technologies that we're working on and that are coming into Red Hat today. OpenShift pulls these together.
Joe, at a top level, why don't you lay out the agenda and then we can dive into some of the details?
Joe:  We've been out with OpenShift for quite some time. We actually launched it in developer preview over three and a half years ago now. Today we have both the commercial public PaaS service in OpenShift Online as well as a private PaaS product in OpenShift Enterprise.
As you mentioned OpenShift ties together a lot of different Red Hat and open source technologies and wraps a nice developer experience around it. OpenShift is built around Linux containers. It deals with orchestration of those containers at very large scale. It packages in a bunch of interesting developer services and, like I said, wraps that around a nice experience to accelerate application development and bring more agility to companies who are building applications.
Gordon:  Let's break that down into its components. First of all, you've got a platform there. Tell us a little bit about the underlying platform that OpenShift runs on.
Joe:  OpenShift is built on a platform of Red Hat Enterprise Linux. We make use of a lot of technology within RHEL to basically bring the OpenShift product and all of its capabilities to light. Today OpenShift is built on RHEL 6, but what we're working on in V3, as you allude to, is bringing it onto RHEL 7 and moving to our new container model, which will be based around Docker.
OpenShift will leverage the Docker capabilities that were introduced in RHEL 7 when it was launched in June. It will also be able to make use of the RHEL Atomic host, which is a new product that we've announced around the upstream Project Atomic.
RHEL Atomic host is not commercially available yet, but Project Atomic is an active community that we launched back in the Summit time frame in April to work on a lightweight Linux host that's optimized for containerized applications. That's important to OpenShift because everything we run runs in Linux containers.
What we can deliver here with Atomic is a very lightweight host that's optimized for those environments, but it also brings a new model for how you manage the host: you're managing it in an atomic way as well.
Gordon:  The interesting thing about Atomic is that people out there say, "With platform-as-a-service the operating system doesn't matter any longer." VMware was on that particular kick a couple of years ago, and they still don't believe the operating system is important. I wonder why.
One of the things we see with Atomic, and with Red Hat Enterprise Linux more broadly, is that as we talk about these containers and about application packaging through Docker, the operating system is very much at the core of all that.
Joe:  Absolutely. Those applications have to run somewhere. They're running in Docker containers, but those containers are running on a host OS. It's that host OS that provides a lot of the capabilities: isolation through kernel namespaces; security, where we do a lot of work with SELinux to implement a layered security model; and cgroups for resource confinement. All of that comes from the host operating system.
We have a long history of working on Linux kernel technologies going back more than a decade. A lot of that expertise is what we're bringing to not only Atomic but to communities like Docker and other communities in the Docker ecosystem and in the containers ecosystem.
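To make the resource-confinement point a bit more concrete, here's a minimal sketch of what cgroups provide at the host level, using the cgroups v1 filesystem layout found on RHEL. The mount point and group name are assumptions for illustration, the script needs root, and real container runtimes handle this bookkeeping for you.

```python
import os

# Minimal cgroups v1 sketch: create a memory cgroup, cap it at 256 MB,
# and move the current process into it. Paths assume the memory
# controller is mounted at /sys/fs/cgroup/memory (typical on RHEL 7).
CGROUP = "/sys/fs/cgroup/memory/demo"  # "demo" is an arbitrary group name

os.makedirs(CGROUP, exist_ok=True)

# Any process placed in this group is limited to 256 MB of memory.
with open(os.path.join(CGROUP, "memory.limit_in_bytes"), "w") as f:
    f.write(str(256 * 1024 * 1024))

# Writing a PID into the tasks file confines that process to the group.
with open(os.path.join(CGROUP, "tasks"), "w") as f:
    f.write(str(os.getpid()))
```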
Gordon:  Let's talk a little more about Docker, specifically application packaging with Docker, how that relates to OpenShift, and how it relates to our cartridge model.
Joe:  It's funny. A lot of people are starting to hear about Linux containers due to the popularity of Docker, but the underlying containers technology has been around for a while. Things like Linux control groups, as I mentioned, and kernel namespaces. We've been using those for years in OpenShift and, even further back, our customers have been using them on RHEL 6. Companies like Google have been using containers technology at scale for a long time as well.
What Docker really brings is a new packaging model for applications that run inside containers, and it enables what we refer to as application-centric IT. This packaging model, which basically starts with a Dockerfile that gets built into a Docker image, brings things like portability of your applications across different environments.
The Docker image runs anywhere a compatible Linux host runs. As people know, Linux runs everywhere. That means you'll be able to take that image and run it not only in the public cloud, with providers like Google and Amazon who have already announced support for Docker, but in your private cloud as well, regardless of whether that private cloud is built on OpenStack or VMware virtualization technologies or even on bare metal servers.
It spans all the footprints from bare metal to virtualization to private cloud and even public cloud. That's powerful.
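As a rough sketch of the Dockerfile-to-image flow Joe describes, the snippet below writes a trivial Dockerfile and shells out to the Docker CLI. The base image, application file, and tag are placeholders, not anything from the podcast.

```python
import subprocess

# Illustrative only: a trivial Dockerfile. The base image "rhel7" and
# the application file are placeholders.
DOCKERFILE = """\
# Any compatible Linux base image works; the resulting image is then
# portable across bare metal, VMs, private clouds, and public clouds.
FROM rhel7
COPY app.py /opt/app/app.py
EXPOSE 8080
CMD ["python", "/opt/app/app.py"]
"""

with open("Dockerfile", "w") as f:
    f.write(DOCKERFILE)

# Build the image once...
subprocess.check_call(["docker", "build", "-t", "myorg/myapp:latest", "."])

# ...then run the same image, unchanged, on any Docker-capable host.
subprocess.check_call(["docker", "run", "-d", "-p", "8080:8080", "myorg/myapp:latest"])
```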
Some of the other things that we see are the benefits that it brings to development and to the operations team that runs the PaaS environment. When we launched OpenShift more than three years ago we decided to build it around containers because of the speed with which you could deploy applications within those containers.
When a developer comes to OpenShift they can spin up applications in seconds. That wouldn't be possible if we were spinning up a full guest VM for each application or a group of VMs for larger applications where you have to bring up an OS within those VM guests. The speed and agility that containers enable for developers is very exciting.
It also brings a lot of efficiencies for IT both in making more efficient use of the infrastructure that it runs on, getting more density of applications per host but also making it more efficient to manage because you're managing a single host kernel that's now running multiple applications as opposed to managing a host kernel and then guest OSs for each VM.
All these capabilities, like I mentioned, come not just from Docker but from the broader containers movement. It's exciting to see so many people getting involved in that movement and to see new capabilities being introduced in this area, both from Red Hat and from ISVs like Docker.
Gordon:  OpenShift offers features like automated scaling that can span a number of servers, and OpenShift itself can run across a large population of VMs or bare metal servers, your choice where to run it. Maybe talk a little about where we're going with handling that multi‑host server environment.
Joe:  As you mentioned, containers are great, but applications don't run in a single container. Typical applications that we see in OpenShift will span multiple containers. They don't all just run on one host; they're going to span multiple hosts.
In our OpenShift Online environment, for example, we have hundreds of VM instances hosting hundreds of thousands of applications. Each application may have a handful to a large number of containers that form that stack. This is all an orchestration and scheduling challenge, and it's the role of the OpenShift broker.
When we decided to architect OpenShift V3, one of the key decisions was moving to the Docker standard for containers. We were excited to see a standard building around containerization. What we've also decided is to take our orchestration technology forward by working with Google on the Kubernetes project.
Kubernetes brings what we think will be very exciting web-scale orchestration capabilities to OpenShift V3 and to the broader open source community. We've joined forces with Google and a number of others around orchestrating Docker containers via Kubernetes.
It gives us the model for how these application stacks are built, through concepts like Kubernetes pods deployed across different instances, and allows us to do things like connect a container running your application, say a JBoss or Tomcat application, to another container that may be running your database tier. Or connecting it to a cluster of database instances that are each running in their own container, or scaling up a cluster of Tomcat or Python or PHP instances and connecting that to a load balancer or a web tier.
All these things are different containers that need to be orchestrated and then deployed. That's where orchestration comes in, in the OpenShift V3 model.
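For a flavor of what a pod looks like, here's a sketch of a single-container pod definition of the kind Kubernetes schedules. The image name, labels, and environment variable are invented, and the field names follow the later stable Kubernetes API rather than the v1beta API that OpenShift V3 was being built against at the time.

```python
import json

# Hypothetical pod manifest: one Tomcat container that expects to reach a
# database through a separately deployed service named "mysql-service".
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "frontend", "labels": {"app": "frontend"}},
    "spec": {
        "containers": [
            {
                "name": "tomcat",
                "image": "myorg/tomcat-app:latest",
                "ports": [{"containerPort": 8080}],
                # Wiring to the database tier, which would run in its own
                # pod(s) behind a Kubernetes service.
                "env": [{"name": "DB_HOST", "value": "mysql-service"}],
            }
        ]
    },
}

with open("frontend-pod.json", "w") as f:
    json.dump(pod, f, indent=2)

# The manifest can then be handed to the cluster, e.g.:
#   kubectl create -f frontend-pod.json
```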
Gordon:  Let's switch gears a little bit, from the underlying technology enabling OpenShift to what's happening in the services that run on OpenShift and the ecosystems associated with it. What's happening there?
Joe:  This gets to the heart of what matters to developers. All that underlying stuff ultimately is what enables developers to build the applications that they care about. In order to do that you need to be able to support the languages, the frameworks that they want to build those applications in and be able to provide the other services their applications require whether that's databases or messaging or other middleware, even continuous integration and other capabilities.
The packaging that we have in OpenShift for that today is called OpenShift Cartridges. In V3 that packaging will be based on Docker images. The biggest thing we saw in Docker was a very flexible and powerful packaging format for these application components, and then a very broad ecosystem in the Docker Hub, where literally thousands of these component images exist.
Whether you are looking for a Ruby stack or a Python stack, or you're looking for an image from MongoDB or MySQL or what have you, you can find not just one but hundreds of examples of these services in the Docker Hub today. Our goal in OpenShift V3 is to allow you to run arbitrary images from that ecosystem in your OpenShift environment.
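As a small, hypothetical illustration of tapping that ecosystem, the sketch below searches Docker Hub for MySQL images and pulls one to the local host; "mysql" is simply an example of a service you might want in your stack.

```python
import subprocess

# Search Docker Hub for candidate images, then pull one to the local host.
subprocess.check_call(["docker", "search", "mysql"])
subprocess.check_call(["docker", "pull", "mysql"])

# Confirm the image is now available locally.
subprocess.check_call(["docker", "images", "mysql"])
```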
A couple of things there. We're going to start with our own products. In OpenShift we leverage technology from our JBoss middleware division, as well as technology that's packaged for RHEL as part of Red Hat Software Collections, to provide some of the things that come out of the box when you use OpenShift or purchase OpenShift Enterprise. Things like JBoss, things like Tomcat, and we're about to launch a new Fuse cartridge, and so forth. We're packaging all of our existing products as Docker images so that they can be available as services within OpenShift V3.
What we're also doing is working with ISVs on certifying their images through our Red Hat container certification program to run not only on RHEL but on products like OpenShift that build on top of RHEL and RHEL Atomic. What this does is enable enterprises to know what the supported option, the safe choice, is if they're looking for a particular stack.
They know that's a good option because it's something that's certified not only by the ISV but also by Red Hat. That's the goal there. Ultimately, the end goal is to provide as many services as we can so that there's a nearly limitless number of applications that customers can run on OpenShift and deploy and manage there as well.
Gordon:  To finish things up here, talk a little bit about what all this means for the developer experience, because that's really the name of the game.
Joe:  Developers care about what you have available, what's in their palette to build with, and then how you enable them to build it. Is this a familiar environment? Do they have the development tools and the debugging tools they need to build their applications in a way that's natural for them?
We've focused on giving different developers different interfaces to access OpenShift. That includes our web console, our command line interface, and also various IDEs: the Eclipse IDE through JBoss Developer Studio, as well as support for other IDEs like IntelliJ and Appcelerator that comes from the ISV partners we work with.
That's a great starting point. From those interfaces developers can work on their code and then push that code directly to their OpenShift environment. We also recognize that the developer, again, needs the container to provide the stack, but what they care about is their code.
If they are a Java developer they care about their Java code. If they are a Ruby developer they care about their Ruby code. OpenShift basically allows developers to take their code, in whatever repository it lives, push it directly to the platform, and automatically have it compiled, running in a container, and accessible as an application.
You can also push binaries as well, but OpenShift is unique in allowing you to push either source code where we build it, we manage all your dependencies, we deploy the application or push already configured binaries. That's going to continue in V3 through our integration with Git and the ability to essentially do a Git push of your code and have it automatically rebuild your container image, your Docker image with that code combined with the underlying stack whether it's JBoss or Apache or whatever stack is running it.
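As a minimal sketch of that Git-push workflow from the developer's side: the remote name "openshift" and the commit message below are placeholders, and the platform handles the rebuild and redeploy after the push.

```python
import subprocess

def push_code(message, remote="openshift", branch="master"):
    """Commit local changes and push them to the platform's git remote."""
    subprocess.check_call(["git", "add", "-A"])
    subprocess.check_call(["git", "commit", "-m", message])
    # On receipt of the push, the platform rebuilds the container image
    # with the new code and redeploys the application.
    subprocess.check_call(["git", "push", remote, branch])

push_code("Update welcome page")
```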
Lastly is debugging. Some popular features include being able to debug directly from Eclipse, which you can do in OpenShift. Features like port forwarding, like being able to SSH directly into your containers, into your gears, and actually work directly with the underlying runtimes, or run something like a database management tool against your database instances, and so forth.
These are things that are all part of the developer experience that people value today in OpenShift and that we're bringing forward and continuing to build on as we move forward in OpenShift V3.
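To make the port-forwarding idea a bit more concrete, here's a rough sketch of the kind of SSH tunnel involved, forwarding a local port to a database running inside a gear so a local client can connect. The gear address is a placeholder, and OpenShift's client tooling wraps roughly this for you.

```python
import subprocess

# Placeholder gear/container SSH address; in practice the platform
# tooling knows this for each application.
GEAR_SSH = "user@myapp-mydomain.example.com"

# Forward local port 3306 to MySQL listening inside the gear, without
# running a remote command (-N). A local database tool can then connect
# to 127.0.0.1:3306 as if the database were running locally.
subprocess.check_call([
    "ssh", "-N",
    "-L", "3306:127.0.0.1:3306",
    GEAR_SSH,
])
```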
Gordon:  By the way, if there are any developers listening to this and you haven't tried OpenShift Online, why not? It's free, no credit card, no anything like that. It's so simple even I was able to develop an application with it.
Joe:  That's the beauty. We get a lot of benefit by basically seeing both sides. In OpenShift Online, as you mentioned, any developer can come to openshift.com and with just an email address sign up for an account and get free access to up to three gears or three containers where they can host one to three applications, and those are gears that don't expire.
Then we have commercial plans. As you expand and need more capacity, or if you want professional support, you can expand into one of our commercial tiers in the online service. That also provides great value to our OpenShift Enterprise customers, because all of the things we learn by serving online developers and running the platform ourselves at scale feed back into our OpenShift Enterprise product, which we then package up and ship to customers who want to deploy their own PaaS.
In a sense, our enterprise customers are the administrators: they're deploying and administering OpenShift itself, and they learn from talking to our own administrators and benefit from the features and enhancements we put into OpenShift to run our own online service.
We really are eating our own dog food at large scale, and we're able to run those businesses in parallel so that they mutually benefit each other.
Gordon:  I should mention, because otherwise I'm going to get into a lot of trouble with our fine community manager, that OpenShift Origin is the open source project. It's the upstream version.
Joe:  That's right. Every commercial product at Red Hat is based on an open source upstream community project. That's where all our development happens, that's where the innovation lives. OpenShift Origin serves as the upstream for both OpenShift Online and OpenShift Enterprise.
Our community isn't just limited to Origin as we've already discussed. Origin actually ties in other communities. On the host side it ties in communities like Project Atomic, like Fedora, even CentOS. On the container side the Docker community is a community that we participate heavily in and we pull that into Origin.
I already mentioned that on the orchestration side we're members of the Kubernetes community and developing there. We're not just pulling the stuff in. Red Hat, as has been noted in a couple of articles recently, is actually one of the leading contributors to projects like Docker as well as projects like Kubernetes.
We're bringing the code, versus just talking about these things or forking or pulling in these projects. We're working closely with each of these communities, and those are the communities that make up OpenShift. Origin is the upstream, but Origin is at the center of a lot of other important communities.
I failed to mention the many JBoss communities that provide a lot of the middleware. Things from the jboss.org ecosystem, whether it's WildFly on the application server side or the Fuse ecosystem on the integration side, and others as well.
It's exciting to work with so many vibrant communities and interesting to work on a product that pulls all these things together into a cohesive solution for our end users.
Gordon:  Thanks a lot, Joe. Anything we missed?
Joe:  For folks who aren't familiar with OpenShift, I encourage you to sign up for a free account. Give us a try. If you're looking for more information, feel free to reach out to us through openshift.com.
We're excited to have more folks involved in what we're doing in V3. We have some information on our website on how you can get a look at our latest and greatest stuff that's built around Docker and Kubernetes and Atomic and some of the things we've discussed here.

There'll be a full beta this fall but there's already code in the Origin upstream that you can download and try. We're looking at commercial availability on that platform some time next year.

Photos from Bali this summer

Hike up Gunung Batur for sunrise. I was in Bali on business in June and I finally got around to processing the pics I took in the few days before and after the event. Many more are up on Flickr.

What's up with me this fall?

I’ve been busy in September but have minimized the travel and even local events, in part to gain some focus time on a number of projects I needed to bang through. With October ’round the bend, though, I’ll be heading into the wild blue yonder again for speaking engagements and other purposes. My current schedule looks like the following. Feel free to reach out if you want to meet, record a podcast, or just have a beer.

  • Monktoberfest, October 2nd and 3rd, Portland ME.
  • 451 Research Hosting and Cloud Transformation Summit, October 6-8, Las Vegas, NV
  • CloudOpen/LinuxCon, October 13-15, Düsseldorf, Germany. I have two sessions: What Manufacturing Teaches us about DevOps and The Cloud in 10,000 Words (or 10 Pictures). The former is a largely new presentation that takes a broad look at the ways in which manufacturing has evolved to find parallels with (and lessons for) DevOps. It’s a broader take on the process than just relatively recent lean manufacturing approaches and the like (i.e. Deming et al.). The latter is an update to the session I gave at CloudOpen Chicago in August, which looks at some of the important trends around cloud and related technologies (Big Data, IoT).
  • CloudExpo, November 4-6, Santa Clara, CA. I’ll be doing a variant of What Manufacturing Teaches us about DevOps. I have passes for the show if you need one.
  • Amazon re:Invent, November 11-14, Las Vegas, NV. 
  • Cloud Law European Summit, November 25, London, UK. I’ll be keynoting on The Hybrid Cloud: the future of cloud computing.

Monday, September 22, 2014

Links for 09-22-2014

Tuesday, September 09, 2014

The dark side of language diversification

My former colleague, RedMonk’s Stephen O’Grady, writing about the diversification of language options and what might follow:

It may be difficult to conceive of a return to a more simple environment, but remember that the Cambrian explosion the current rate of innovation is often compared to was itself very brief – in geologic terms, at least. Unnatural rates of change are by definition unnatural, and therefore difficult to sustain over time. It is doubtful that we’ll ever see a return to the radically more simple environment created by the early software giants, but it’s likely that we’ll see dramatically fewer popular options per category.

Whether we’re reaching apex of the swing towards fragmentation is debatable, less so is the fact that the pendulum will swing the other way eventually. It’s not a matter of if, but when.

[Image: ProgLanguages]

In the past, I’ve mildly disagreed with Stephen and his colleague Donnie Berkholz about whether we were really seeing anything more than the usual plethora of languages that see some use and even some hype but don’t have any real impact and eventually fade away. Most of the languages we use today would have been at least passingly familiar to Web 1.0 and enterprise programmers of the dot-com era, even if the mix has shifted over time.

However, Stephen and Donnie’s data has made me at least tentatively come to agree that there’s been an increase in fragmentation. Serious infrastructures, platforms, and apps are being created with “non-mainstream” languages such as Scala. Go, from Google, is probably the latest major new entry; it’s used in Docker, which is one of the hottest software projects happening right now.

Arguably, fragmentation doesn’t matter so much in a microservices world in which all manner of gratuitous differences can be abstracted from the underlying platform. And developers are supposed to be in control after all, aren’t they?

But as Stephen notes, quoting from a number of senior developers, there are limits to all this. Tim Bray, for example, points out that having to stay current in an overly broad toolchain takes away from having the time and attention to actually drain the swamp. And just because you can abstract gratuitous differences from the underlying platform doesn’t make gratuitous differences a good thing. Code usually needs to be maintained after all. 

There’s a great choice of tools out there. And it’s important to support that choice in programming platforms (as Red Hat does with our OpenShift PaaS). But, at the same time, that choice can offer a lot of rope, and it’s up to developers and programming teams to use that rope for good and not for ill.

Image source: Startapp.com

Send a fax from the beach

There are things in these commercials that are pretty spot on, like the electronic tolls, probably aided by the fact that there were nascent working examples at the time the commercials were made. In others there are delightfully amusing anachronisms. For example, wirelessly connecting from the beach to send a fax. Virtually tucking a baby in with a FaceTime-like video call, from a phone booth.


Monday, September 08, 2014

Disjoint Sets

Video: Is DevOps changing enterprise IT?

Prior to the last CloudExpo in New York, we trudged through the pouring rain to record some video panels. (There wasn't a taxi to be had, and even Uber had a long wait at 3x surge pricing.) But we kept things short and to the point, and I think you'll find this worth your time.