Thursday, June 21, 2012

Links for 06-21-2012

Monday, June 18, 2012

Forecast: The cloudy future is still hybrid

Over at Archimedius, Greg Ness writes:

There was consensus from the panel on the rise of the private cloud and the eventual decline of the public cloud (IaaS). According to one panelist, “the private cloud is cheaper than the public cloud (for many enterprise environments).” While the public cloud will thrive for SMBs (because it reduces the expense threshold for technology services); private clouds will thrive for IT infrastructures above 500 kW.

This sentiment was also consistent with findings shared by a top editor over dinner at Interop (Lazard’s annual “emerging tech trends dinner”) and by another high-profile editor we briefed on our recent May 2012 Vantage data center tour. It appears that the public cloud peaked last year and is today receiving less interest from enterprises. More about this in coming weeks.

On the one hand, I find the contention that public clouds will "decline" overstated--even given Greg's clarification in the comments that "By 'decline of the public cloud' I do NOT mean that IaaS will shrink or has started shrinking, but rather that it will see declining enterprise share relative to private clouds and newly constructed enterprise data centers as well as PaaS." I agree that the public cloud economic argument is less compelling for organizations with the scale to build large-scale data centers. But any categorical "the private cloud is cheaper" argument applied not just to today but to tomorrow seems overly glib.

With that said, this discussion reinforces the very valid point that we're not on some unstoppable trajectory to an everything-public world. That's a point worth making about a trend that started out being talked about in precisely this vein. Cloud computing will not, in important respects, mirror centrally generated and utility-priced electrical power. What analogs exist between the two are imperfect and limited.

But ultimately this isn't about private cloud or public cloud, determined as those with certain agendas are to champion the eventual triumph of one approach or another. It's about a hybrid future in which some organizations and some workloads will use primarily private infrastructures and others will use public ones. And that means the best cloud management approaches will be those that maintain flexibility by being open and by being hybrid.

Links for 06-18-2012

Friday, June 15, 2012

Podcast: OpenShift Platform-as-a-Service: What's needed for the enterprise

Hosted Platform-as-a-Service delivers a lot of benefits to developers. But it doesn't always meet the operational demands of enterprise IT. Red Hat's Dan Juengst and Joe Fernandes talk PaaS operational models and why Polyglot PaaS is important. Among the topics covered are:
  • Why PaaS?
  • Polyglot (multi-language/framework) PaaS
  • Portability of applications
  • Red Hat's OpenShift enterprise PaaS strategy
  • DevOps and ITops operational models
  • What's inhibited PaaS enterprise adoption?
Listen to MP3 (0:23:42)
Listen to OGG (0:23:42)

[Transcript]


Gordon Haff:  You're listening to the Cloudy Chat Podcast with Gordon Haff. Hi, everyone. This is Gordon Haff. I'm cloud evangelist with Red Hat. Today I'm sitting here with one, and on the phone with another, of our Red Hat product marketing managers in the Cloud Business Unit who work with Red Hat platform‑as‑a‑service. Sitting here with me is Joe Fernandes, senior product marketing manager, and on the phone I have Dan Juengst, PaaS strategist. Welcome, Joe and Dan.
Dan, let's start with you. You're involved with our OpenShift platform‑as‑a‑service, currently a hosted platform in developer preview. Can you maybe start by talking a little bit about what OpenShift is and what it's for, who's using it, that kind of thing?

Dan Juengst:  Certainly, Gordon. Thank you. Yes. OpenShift is, as you described, a cloud‑computing platform‑as‑a‑service offering, or PaaS as we say. It's being offered by Red Hat, and it is today in developer preview. It will be going live in production mode a little bit later this year. Platform‑as‑a‑service is one of the three canonical delivery models for cloud computing. Whereas infrastructure‑as‑a‑service really provides just pure compute resources running in the cloud, and software‑as‑a‑service provides a fully baked application running in the cloud, platform‑as‑a‑service is designed to provide developers and enterprises with a complete application development and execution platform running in the cloud. The goal is to allow developers to very easily build their own applications and run them in the cloud and take advantage of all the cloud advantages, such as scalability, resource pooling, elasticity, on‑demand access, and so forth.

Red Hat has built OpenShift as a platform‑as‑a‑service and is running it today. It's built on some really cool industrial‑strength technologies from Red Hat, including Red Hat Enterprise Linux, which gives us some very powerful security and fine‑grained multi‑tenancy capabilities within the platform‑as‑a‑service itself. We've also got JBoss middleware baked into OpenShift, which gives us a full suite of Java EE capabilities, so people can build real enterprise‑class applications and run them in the cloud.

Gordon:  Applications that you build for OpenShift can be deployed elsewhere.

Dan:  That's a great point, and that's correct, yes. Red Hat OpenShift is built on open‑source technologies and, really, the out‑of‑the‑box flavors of those technologies and the languages, so that any application that you write and run on OpenShift, whether it's a Java application running in JBoss or a Ruby application or a node.js application, can be pulled out and run locally in your own data center or run on another cloud platform. In addition, OpenShift is a polyglot, or multi‑language, platform‑as‑a‑service. It supports Java and Ruby and Python, as well as node.js and PHP and Perl. This gives developers the flexibility to choose the language that they want to code in to write their applications.

Gordon:  That's actually probably a pretty good segue to what I'd like to talk about next. Joe, there was a recent OpenShift strategy announcement, and that really starts to get into how you can take applications developed on a publicly hosted platform, such as the current OpenShift, and run them in a PaaS deployed in other ways that are more suitable for certain enterprises.

Joe Fernandes:  Since the introduction of the OpenShift platform‑as‑a‑service just over a year ago, we've really seen tremendous interest from our Red Hat enterprise customer base, both in when OpenShift would be available as a commercial public offering and in when we'd have support for an on‑premise solution for deployment in private and hybrid clouds. The announcement we made last week [May 9, 2012] really outlines our strategy and road map for delivering on both of these requests.

We put out our Red Hat Enterprise PaaS strategy for OpenShift and really detailed a few key points. One is why we felt OpenShift is the best cloud application platform for enterprise developers and enterprise application needs. We also talked about our intent to make OpenShift available as both a commercial‑hosted as well as an on‑premise deployable solution and our plans to support different enterprise PaaS operational models to meet the needs of our enterprise customers.

Gordon:  I think that speaks to the fact that what makes a hosted PaaS, again, like the current hosted OpenShift, really interesting is that developers can just focus on the application. They don't need to worry about the underlying operating system and other infrastructure. That really applies to a lot of enterprise developers too. Application developers in the enterprise are not down there mucking around with operating systems, or at least they don't want to be.

Joe:  That's absolutely right. We think that the appeal of platform‑as‑a‑service is universal to all developers. It's just that enterprise developers, they do have different needs as well as different constraints, which we can get into. But from the perspective of the developers, it doesn't really matter who the PaaS provider is. 

A lot of the current stuff that's out there around PaaS, a lot of the current offerings, are really focused around a public PaaS service, which we think is great, and we think we have one of the best ones in OpenShift. But, again, from the perspective of the enterprise developer, if their organization is only able to deploy something on‑premise or prefers a hybrid cloud model, they don't care if their provider is Red Hat or if their provider is the internal IT organization within their company. They just want to be able to get things done quickly, as we've described.

Gordon:  Joe, you probably have started touching on some of the reasons. But Dan, what are some of the reasons you think PaaS hasn't been adopted more widely in the enterprise so far?

Dan:  Right. We see that there certainly are some challenges. As Joe alluded to, a lot of our customers are asking us about PaaS. The funny thing is, when we ask them, "What do you mean by platform‑as‑a‑service, and how would it work in your environment?" they all have different answers. Enterprises all have different constraints and different operations and methodologies that they have to conform to within their own business. IT architects in particular have to worry about a lot of things that can impact how a PaaS is used and how it's accessed, so things around security and data privacy and governance. There's certainly a lot of compliance issues for large enterprises, whether it's HIPAA or SOX or PCI. 

When PaaSes are really used in an enterprise organization in an IT‑operations sense, they need to account for these constraints that the enterprise is going to have. The PaaS needs to be flexible in terms of how it can be deployed and how it can be operated to be used effectively by an enterprise. That includes whether it's deployed just on‑premise or just in the public cloud, or in a hybrid scenario taking advantage of both the elasticity of the public cloud and the data privacy and security of the private cloud.

Gordon:  That seems to be one of the really distinguishing factors of this next generation of PaaS from the first generation. The first‑generation philosophy seemed to be "You can have it in any color you want as long as it's black. But it's going to be really cool and good, so you're not going to care about being limited to that single platform." But the reality is that doesn't seem to be how users think, for the most part. Joe, OpenShift. We've been throwing that word around. Why is that platform, from your view at Red Hat, so important to enterprise adoption?

Joe:  The OpenShift platform is really fundamental to everything we do. When we say "the OpenShift platform," what we're talking about is the collection of both Red Hat and open‑source technology that really powers OpenShift. As Dan mentioned, this starts with a secure and scalable multi‑tenant operating system, which is built on Red Hat Enterprise Linux and leverages technologies such as cgroups and SELinux to deliver secure, scalable multi‑tenancy. That's really important for us in terms of being able to manage all of these various applications while ensuring that we can scale individual applications and secure them from one another.
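
To make the cgroups point a little more concrete, here is a minimal, hypothetical Python sketch of the kind of per-application resource capping Joe describes. It assumes a 2012-era cgroup v1 hierarchy with the memory controller mounted at /sys/fs/cgroup/memory and requires root; the group name and limit are made up for illustration, and this is not OpenShift's actual implementation.

    # Hypothetical sketch: cap one tenant application's memory with a cgroup
    # (cgroup v1 layout, memory controller mounted at /sys/fs/cgroup/memory).
    # Group name, limit, and paths are illustrative; running this requires root.
    import os

    CGROUP_ROOT = "/sys/fs/cgroup/memory"

    def cap_app_memory(app_name, limit_bytes, pid):
        group = os.path.join(CGROUP_ROOT, app_name)
        os.makedirs(group, exist_ok=True)                # create the per-app group
        with open(os.path.join(group, "memory.limit_in_bytes"), "w") as f:
            f.write(str(limit_bytes))                     # hard memory cap for the app
        with open(os.path.join(group, "tasks"), "w") as f:
            f.write(str(pid))                             # move the app's process into the group

    # e.g. cap the current process at 256 MB
    cap_app_memory("tenant-app-1", 256 * 1024 * 1024, os.getpid())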

Then it also includes enterprise‑class middleware services, which are built on top of JBoss, and specifically the latest version of the JBoss Application Server. This provides support for Java EE 6, but also gives a very lightweight, blazing‑fast, and very scalable container on which to build enterprise applications. As folks know, JBoss also includes enterprise‑class services around messaging, transactions, caching, and so forth that are very critical for building true enterprise‑class applications. We're now offering that in our OpenShift PaaS.

Dan already mentioned this, but OpenShift supports not only Java but a number of languages and frameworks, including Java, Ruby, PHP, Python, Perl, and node.js. Again, we're supporting both modern, scripting‑driven languages and frameworks as well as enterprise Java and different frameworks for Java, like Spring for example.

We also include additional platform services. We have both SQL and NoSQL data services as well as mobile‑application frameworks and other services that come from our open‑source ecosystem. We're working with partners like 10gen, who are basically providing NoSQL services through MongoDB. We're also working with partners like Appcelerator, which are providing mobile‑application frameworks and services to build mobile applications on OpenShift.

In addition to that, we're also providing life‑cycle development tools. We've integrated things like Jenkins and Git and Maven and so forth, as well as Eclipse‑based tooling and JBoss Developer Studio integration. Again, if you're a developer, not only do you have your choice of languages and frameworks, not only do you have a rich collection of services on which to build your application, but you also have rich tooling that allows you to build that application, and in different ways. You can build it through your Eclipse IDE, you can build it through a command‑line interface if that's how you prefer to code, or you can use our rich web interface to upload and work with your application code.
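
As a rough illustration of the command-line path Joe mentions, here is a small, hypothetical Python wrapper around the git-driven flow OpenShift uses: commit locally, then push to the application's git remote, which triggers the build and redeploy on the platform. The remote name "openshift" and the helper function are assumptions for this sketch, not part of the product's tooling.

    # Hypothetical illustration of the git-driven deploy flow: commit locally,
    # then push to the application's PaaS git remote, which triggers the build
    # and redeploy. The remote name "openshift" is an assumption for this sketch.
    import subprocess

    def deploy(message="update application"):
        subprocess.check_call(["git", "add", "-A"])                    # stage changes
        subprocess.check_call(["git", "commit", "-m", message])        # commit locally
        subprocess.check_call(["git", "push", "openshift", "master"])  # push kicks off the build/deploy

    deploy("tweak welcome page")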

Again, we really feel that what's fundamental to OpenShift is all of this great technology that we've brought together into the OpenShift platform.

Gordon:  That's the development side of things, if you would: the tooling and the frameworks and the way you can write applications easily. But the other side of the fence, if you would, is operations. Certainly, there's a lot of talk about DevOps, this idea of the coming together of IT‑operations concerns and developer concerns. This is actually happening in some circles; I would probably argue that in all circles there is a breaking down of barriers, however the ultimate operational model is carried out. But, Dan, I wonder if maybe you could talk a little bit about what DevOps is, what it means, and how some of the other operational models associated with enterprises intersect with that.

Dan:  Yeah, certainly. We're going to take that awesome OpenShift platform that Joe described, that cool set of technologies, and we're going to make it available to enterprises. But as I mentioned before, we also recognize that there are constraints the enterprises have to work within. Certainly, DevOps is a hot notion and a hot topic. It's almost the holy grail for some enterprises, not all. But DevOps really gives you an automated way to streamline the application life cycle. DevOps makes it much easier for developers to create and deploy applications because they have that control.

DevOps is essentially running a platform as a service and giving the developers access to anything that they want. That's one model that we're looking at and we're going to be delivering for the OpenShift platform, give that ease of use and that agility and that power to the developers.

But we also, as I said, recognize that enterprises have other constraints and there may be some situations, whether it's compliance or governance or audit‑related, that they need to have more control being held by the IT organization.

We're also going to deliver the OpenShift platform in an ITOps fashion. ITOps is a phrase that we're using to talk about a delivery model for platform as a service that gives more control to the IT organization and allows them to very carefully specify what's available to the developers, to create application templates the developers can then deploy on, and to just have more control, more logging, and more governance of the platform as a service itself.

The ITOps model for OpenShift PaaS will be something that some enterprises will adopt and be able to utilize. Then others will move on to the DevOps model to take advantage of the speed and agility.

In addition, we're going to provide two other models. One we already have today: the hosted public cloud version of OpenShift. There are certainly some applications and some enterprises that are interested and willing to run their applications in the public cloud already. They can take advantage of the hosted service, which is up and operational today, and work with it in a very agile and DevOps type of fashion, just in the public cloud.

Then the fourth model is something that we feel is also important. That's allowing developers to work on a PaaS when they're not connected to the Internet, when they're on an airplane, for example. We call this the offline model.

What this will be, essentially, is a PaaS running on their laptop, or a bitwise-compatible version of the PaaS running on their laptop. This offline edition, as we're calling it, is something we've already made available through the open sourcing of the OpenShift product.

Developers are able to download a virtual machine image and run a PaaS on their laptop. That gives them access when they're not connected. The hosted model gives them access to run applications in the public cloud. The ITOps and the DevOps models give them access to run applications on‑premise or in a hybrid model as well.

Gordon:  Do you have anything to add to that?

Joe:  Yeah. I think the conversation that Dan had around DevOps and ITOps is a conversation that we've been having with a number of our customers as well as analysts and other folks in the industry. Again, when folks are asking for PaaS and are intrigued by this DevOps model, it really is important that we get a common understanding around what this is and so forth. For example, in OpenShift, and not only OpenShift but in any PaaS offering, developers have tremendous control to just upload their code, quickly deploy their applications into production, and manage those independently of anything else that's going on, any other applications and so forth.

That's great. It provides a lot of agility and a lot of flexibility for those developers. But it's not really how things work in a typical enterprise. In an enterprise, as Dan mentioned, IT really retains a lot more control. You have both system administrators and application administrators who retain a lot more control over the deployment and management of enterprise applications for various reasons.

These organizations really want the benefits of PaaS and a lot of the benefits of a DevOps‑like model, but they really need to work within their more traditional IT management practices that they've put in place.

That's really what this ITOps model is all about: providing the benefits of PaaS, but in a way that is more consumable for our enterprise customers.

Gordon:  I think there is a point here though that is worth emphasizing. When we talk about IT control in ITOps, we're talking about the IT architects, the operators maintaining control of key compliance, key governance. But we're not talking about going back to a model where you fill out a form and wait three weeks to get access to some resources, either.

Joe:  No, absolutely not. I think, again, the key thing here is being able to deliver on the agility and the flexibility that PaaS provides. We're talking about self‑service deployment for those application developers. We're talking about a standard catalog of applications and platform services that they have access to, and we're talking about being able to essentially get IT out of the way when it comes to getting these applications built. We just have to recognize that there are constraints that enterprise organizations have that developers can't always work around. There are compliance considerations, security considerations, different practices that they have in place that are important, and sometimes restrict what's in the best interest of those developers and so forth.

Finding a model that can work for the enterprise developers while still addressing the needs of these enterprise architects and IT operations teams is really important for getting PaaS adoption in the enterprise.

Gordon:  But really, whatever the operational model, developers have speed and agility. They might just not have quite as much freedom to pick their PHP library of choice, for example, under some conditions.

Joe:  Exactly. We think this is going to evolve. We think that over time more enterprises may move towards a pure DevOps model. Or within some enterprises they may have the need for both, depending on the nature of the application or where it is in the life cycle.

Gordon:  Great. Red Hat talks about open hybrid cloud strategies. A lot of that really has been maybe a little more focused today on building an infrastructure as a service cloud. Where does OpenShift, where does platform as a service fit within that strategy?

Joe:  Yeah. We think that this fits right into what we've been talking about here at Red Hat around the importance of open hybrid clouds. When you think about it, the open hybrid cloud is really about helping enterprises achieve a lot of the efficiencies and agility of the cloud that we've seen reported by public cloud providers: things like the cost to run their compute, the management costs of the infrastructure and applications, and the agility of being able to develop and deploy applications very quickly.

We're trying to bring a lot of those benefits of the cloud to enterprises, but while still addressing the realities of enterprise IT. Enterprise IT organizations aren't in the same situation as public cloud providers. They have legacy applications. They have a complex, heterogeneous infrastructure mix.

While a lot of vendors are out there promoting a cloud‑in‑a‑box type of solution, where they're just going to roll something in and stand up a cloud, how is that going to work in an environment where you have to deal with all of these legacy apps and infrastructure and these enterprise concerns?

Other vendors are talking about building essentially what we view as cloud silos, which is basically building clouds on top of a subset of the infrastructure, whether it's an existing virtualization stack or what have you. Again, we don't think that works, because it's not going to bring the benefits of the cloud to the full cross‑section of that organization's infrastructure.

Again, we're promoting an open hybrid cloud approach: not only open source but open standards and open practices and so forth. We have a lot of information out there on that. It's also a hybrid approach that makes the best use not only of everything that's available within the enterprise architecture, but also of what's available out there in the public cloud, again, doing that in an open way so that you're not limited to a single virtualization provider or a single public cloud provider, or to a single slice of your infrastructure or applications and so forth.

That's how this fits in. We think our PaaS strategy is well aligned with what we've been talking about in general around open hybrid clouds.

Gordon:  All right. Thanks, Joe. We've talked about a lot of things here. We've talked about OpenShift. We've talked about OpenShift open sourcing and the recent strategy announcement. We've talked about some of the limiting factors of PaaS adoption in the enterprise. We've talked about the different operational models that we're moving towards for PaaS in the enterprise with OpenShift. I know both of you have been pretty busy recently. What's coming next, Joe?

Joe:  Again, we just put out this strategy announcement last week. We had a great webcast, which we encourage folks to view. But what's coming up next for us is the Red Hat Summit, which is coming up in the last week of June. This is a Red Hat Summit and JBossWorld. This is the coming together of Red Hat's customer base. We'll be making a number of announcements there around our cloud strategy and our cloud roadmap, including more detail around some of these PaaS solutions that Dan and I outlined. We'll be talking about what's coming up in terms of specific bundled solutions as well as giving more details around our road map, which we're very excited about.

After that, we'll be delivering on some of the things that we discussed, including a commercial offering for our OpenShift public PaaS service and on‑premise solutions in various forms, some more geared towards an ITOps operational model, plus some new capabilities around providing a DevOps experience within the enterprise.

We're really excited about that. For folks who are already coming to the Red Hat Summit, we encourage you to attend those sessions. If you're not, you can still register. We look forward to seeing you there.

Gordon:  Great. Well, thanks to both of you. Anything to add?

Joe:  No, I think that's it. Thanks a lot, Gordon.

Dan:  Yeah, thank you for your time, Gordon.

Gordon:  Thanks, Dan and Joe. Take care. Thank you, everyone, for listening. Bye‑bye.

Thursday, June 14, 2012

Links for 06-14-2012

Wednesday, June 13, 2012

Podcast: Red Hat's Matt Hicks talks multi-tenancy in PaaS


Efficient and secure multi-tenancy is one of the big operational challenges in Platform-as-a-Service (PaaS) environments. Principal Architect Matt Hicks describes some of the key tools Red Hat uses to operate its OpenShift PaaS, including SELinux. Matt covers:
  • What multi-tenancy is
  • Why virtual machines by themselves aren't sufficient
  • The important benefits that SELinux can deliver
  • Best practices for PaaS operations
Listen to MP3 (0:12:03)
Listen to OGG (0:12:03)

[Transcript]


Gordon Haff:  You're listening to the Cloudy Chat Podcast with Gordon Haff. Hello, everyone. This is Gordon Haff, cloud evangelist with Red Hat. I'm sitting here with Matt Hicks, principal architect on our OpenShift platform. Welcome, Matt.

Matt Hicks:  Thanks, Gordon.

Gordon:  So, Matt, we're going to talk about multi‑tenancy today. I suppose, if we're going to talk about multi‑tenancy, it would be good to start talking about, what is multi‑tenancy? What do we mean by it?

Matt:  Multi‑tenancy, it's a tough term because it's fairly abstract. I think, for this conversation, when we talk about multi‑tenancy, it's good to frame it. My definition would be being able to run multiple workloads on the same instance of an operating system. That operating system might be a virtual instance, it might be a bare‑metal instance, but multi‑tenancy means that you can run these workloads, they're segmented from each other, they're secure, they can't access each other's data, they can't access the other processes, and they each have somewhat of a feeling that they own the entire machine.

Gordon:  Let's drill down on that security aspect a little bit, because that's been getting quite a bit of attention recently. For that matter, Larry Ellison of Oracle very recently essentially made the statement that the only real way to isolate workloads was by using virtual machines.

Matt:  Yeah, I saw that. I think traditional logic is that we know operating‑system segmentation really well. Virtual machines are an important layer. They provide a great means of keeping operating systems essentially separate, and sysadmins know how to segment them. In one respect, it's somewhat of a true statement, because VMs are great at segmentation.

The challenge with VMs is that, especially in the PaaS space, our density requirements, the amount of stuff that you have to run, are extremely high, and the pressure to keep costs low is very intense as well. A virtual machine carries a lot of operational cost for doing that segmentation. You have sysadmins that are putting up firewall rules and putting them in separate networks, and they have to be patched and updated. If you run a workload per VM, it's very secure, it's very well segmented, but it'll probably be very expensive in a PaaS model.

When we look at multi‑tenancy, one of the things that worries me is the people who just run traditional, Unix‑style segmentation. They take a VM, they run a bunch of processes on it, and then they basically pray that permissions and everything are set right and there is security between them. That's what we tend to call discretionary access control; you'll see the acronym DAC.

Discretionary access control requires that you're essentially perfect. You have all the permissions right. You have all the users properly segmented. The machine is always patched. There are no backdoors for somebody to get from one app to the other.

I think that's very risky. We see a lot of that in the market. That's what people are doing for multi‑tenancy. I think that's a security problem just waiting to happen.

Luckily, there's a very industry‑standard way of solving this. That's moving from discretionary access control to mandatory access control with SELinux. The power of doing that is like moving from a blacklist model, where you have to list all the things that aren't allowed, to more of a whitelist model, where you list the things that are allowed on those machines. It brings with it a tremendous amount of security in a multi‑tenant space.
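
As a rough illustration of the distinction Matt is drawing, the hypothetical Python snippet below contrasts the two views of the same file: the discretionary permissions that have to be set exactly right, and the SELinux label that a mandatory policy enforces regardless of those permission bits. The file path is made up, and the sketch assumes a Linux system with SELinux labels exposed through the security.selinux extended attribute and Python 3.3 or later.

    # Hypothetical sketch contrasting DAC and MAC views of the same file.
    # DAC: the owner/group/mode bits, which all have to be correct.
    # MAC: the SELinux label, enforced by policy regardless of the mode bits.
    # Assumes Linux with SELinux and Python 3.3+ (os.getxattr); path is illustrative.
    import os, stat

    path = "/var/www/html/index.html"    # illustrative path

    st = os.stat(path)
    print("DAC: mode %o, uid %d, gid %d" % (stat.S_IMODE(st.st_mode), st.st_uid, st.st_gid))

    label = os.getxattr(path, "security.selinux")
    print("MAC: SELinux label", label.decode().rstrip("\x00"))
    # e.g. system_u:object_r:httpd_sys_content_t:s0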

In PaaS, we know what applications are doing. It's a very effective thing for us to be able to list the actions that they should take and then block everything else. I think, with SELinux, there's a ton of security and segmentation ability with normal multi‑tenancy. You can get the best of both worlds there.

Gordon:  Organizations like the National Security Agency have been involved in the development of SELinux, so some pretty high‑security people have had a big hand in this.

Matt:  Yeah, absolutely. It's becoming best practice across the board. Even if you're using virtualization, you want your hypervisors and those things controlled by SELinux, because it is that good at helping to avoid exploits. Combining that with the power of being able to segment Unix processes is a great combination. You get the density benefits of avoiding VM sprawl. You have a smaller list of VMs that you have to carry the operational cost of updating and maintaining, and you can carry a wide variety of workloads within those VMs and get a tremendous amount of segmentation between them just with SELinux. It's not new stuff that has to be built. It's really using the capabilities that are in the Linux operating system.

Gordon:  Matt, I want to ask a question very specifically related to platform‑as‑a‑service, since that's what you're involved in the operations of. Nobody should take this to be a statement about all of enterprise IT or all of cloud, but what are the best practices for isolation that you're seeing in the platform‑as‑a‑service space?

Matt:  In the platform‑as‑a‑service space, we're really seeing multi‑tenancy as sort of an evolving standard in that space. The way it's achieved is very different, but the major players, from Google to Heroku to VMware's CloudFoundry, are all using process segmentation, to one degree or another, to achieve the density that's required in PaaS. I think what we'll see going forward is, when you're in the PaaS space, the demands of being able to segment based on multi‑tenancy are going to be the standard. I think that the techniques right now are different across the board. Some people fork the frameworks themselves to take out the insecure things. Some people are just using technologies like LXC with nothing else. Our view is we use basically every tool in the toolkit plus SELinux to be able to have the most secure option. I think that will still evolve a little bit, but I think it's pretty safe to say that multi‑tenancy in this space is probably here to stay.

Gordon:  Because in the PaaS space, you really aren't thinking about the operating system as a user or as a developer. Unlike the infrastructure‑as‑a‑service space, for example, the virtual machine isn't an obvious construct that you care about.

Matt:  Right. In the PaaS space, users interact with components of the operating system, but it's pretty well accepted that you don't have control of the full machine. You might need to get access to ports, but you don't have every port on the system. You might need to get access to HTTP routing, but you don't own the actual top‑level Apache instance. I think that's been pretty well established in the market. That benefit of limiting the use case lets us make multi‑tenancy much more powerful. If we didn't have any limits, we'd have to give each user their own virtual machine because they would expect to be able to control everything on it.

Gordon:  At that point, we're talking infrastructure‑as‑a‑service.

Matt:  Absolutely.

Gordon:  You mentioned LXC. What is that?

Matt:  LXC is actually a project that is focused on segmentation, to some extent, and workload management. It's a combination of a lot of different technologies that are in Linux. LXC is the name of the project, and it uses technologies like Linux control groups to help segment the processes themselves, put them in different groups, so users don't see each other's processes. It uses things like kernel namespaces and technologies like bind mounting to make parts of the file system appear as if you own them‑‑for example, each user sees their own temp directory instead of seeing a big shared one. LXC has been this conglomerate of segmentation technologies. The one challenge with LXC that I don't think a lot of people realize today is that it does not include the SELinux layer. In my presentations, a lot of times I compare it to Japanese paper walls: it's very nice, nice privacy segmentation [but not security isolation].
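
The per-user temp directory Matt mentions can be sketched roughly like this: give a process its own mount namespace and bind-mount a private directory over /tmp, so other processes never see its contents. This is a hypothetical illustration of namespaces plus bind mounts, not how OpenShift itself is implemented; it assumes a Linux host with util-linux's unshare command and requires root.

    # Hypothetical sketch of the "private /tmp" idea: run a command in its own
    # mount namespace with a per-tenant directory bind-mounted over /tmp.
    # Assumes Linux with util-linux's unshare(1); requires root. Not OpenShift's
    # actual implementation, just an illustration of namespaces plus bind mounts.
    import subprocess, tempfile

    def run_with_private_tmp(command):
        private_tmp = tempfile.mkdtemp(prefix="tenant-")   # per-tenant scratch dir
        # unshare --mount gives the child its own mount namespace, so the bind
        # mount of the private directory over /tmp is invisible to everyone else.
        script = "mount --bind '%s' /tmp && exec %s" % (private_tmp, command)
        subprocess.check_call(["unshare", "--mount", "sh", "-c", script])

    run_with_private_tmp("ls -ld /tmp")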

Gordon:  One of the things I find interesting about this discussion is that, historically on Unix systems, there was a wide range of isolation techniques and, essentially, this idea of a trade‑off between physical separation on the one hand and, basically, Unix process control on the other hand, with a whole continuum in between. By the way, virtual machines were somewhere in the middle of that continuum. They were not historically the be‑all and end‑all for maximum isolation. Then it seemed that people were going down this road of "everything's going to be VMs." Now, with platform‑as‑a‑service, with PaaS, and with other new types of operational models, we seem to be coming back to the realization that it's different horses for different courses. There is no one‑size‑fits‑all.

Matt:  I agree completely. I think the virtual machine has a great role in being able to provide segmentation. But just like you said, all of the traditional hosting techniques that were used 20 years ago to segment stuff are still being used by us today, plus this newer generation of tooling, like Linux control groups and SELinux and kernel namespaces, which helps bring that continuum a little bit closer to "we do traditional process segmentation," but we're able to give users a lot more control than they would typically get. They don't have full control of the machine, like they would in a VM, but it helps us strike the balance a little bit better and lets users have a lot of ability even though they're in this sandbox on the machine.

Gordon:  Ultimately, any of this is about striking a balance. If you want perfect isolation, you run your workload on a single physical server locked in a vault that's disconnected from the network. The problem is that's not going to be very useful under most circumstances.

Matt:  Absolutely.

Gordon:  Anything else you'd like to add, Matt?

Matt:  I think it's an exciting space to watch. One of the things I love about PaaS is that demands like density are really driving this resurgence in tools that have been around, in some cases, for a couple of decades. I think it's an exciting space to see the combination of those tools with newer technologies being brought together. It makes that spectrum a lot more powerful, whether physical hardware is what you need for your use case, or whether you can do it purely with virtual machines, or whether you need to start packing density into controlled use cases and go more down the LXC and SELinux‑type model. It's great. I have more fun with Linux these days than I've had in a long time.

Gordon:  You're talking about something we see a lot of in this industry. The basic concepts have been around forever, but they get re‑imagined, and with new technologies they can be put to new uses.

Matt:  Absolutely.

Gordon:  Great, Matt. Thank you.

Matt:  Thanks a lot, Gordon.

Gordon:  Bye‑bye.

Friday, June 08, 2012

Links for 06-08-2012

Podcast: Red Hat's Chris Morgan talks about public cloud providers

Whether running in a pure public cloud environment or a hybrid approach that spans both public and on-premise resources, applications still depend on many of the attributes of an enterprise operating system to run reliably and securely. Red Hat's Chris Morgan discusses how the Red Hat Certified Cloud Provider Program makes this possible:
  • How both cloud providers and end user organizations benefit
  • How Cloud Access works
  • How consistency is provided across a hybrid environment and the benefits it brings
Listen to MP3 (0:10:22)
Listen to OGG (0:10:22)

[Transcript]


Gordon Haff:  You're listening to the Cloudy Chat Podcast with Gordon Haff. Hi, everyone. This is Gordon Haff, cloud evangelist with Red Hat, and I'm sitting here with Chris Morgan, who's the senior product manager for Red Hat's cloud ecosystem. Welcome, Chris.

Chris Morgan:  Thanks, Gordon.

Gordon:  Let me start off with a pretty basic question. You work with a lot of public cloud providers; you've talked to lots of them and logged lots of frequent‑flier miles doing it. Why, fundamentally, do people care about running Red Hat Enterprise Linux and other certified Red Hat products in the cloud?

Chris:  For the providers, it's a two‑horse race in the operating system world. You've got, obviously, Microsoft Windows and you've got Linux. And for Linux, it's Red Hat Enterprise Linux. They know that will bring more consumption to their cloud. Because fundamentally, the way we've been seeing providers in the ecosystem, they're the next generation of OEMs, in many respects. Just like classically we always had to certify Red Hat Enterprise Linux on the major hardware providers‑‑your Dells, your IBMs, your HPs‑‑we're having to do the same thing with these cloud providers, because, again, people expect things to just work when they go to these environments and want to run Red Hat Enterprise Linux and our other Red Hat products.

Gordon:  It's really some of the same reasons that so many people run RHEL in an on‑premise environment. They also want to get those same attributes in a public cloud environment.

Chris:  Absolutely. It's more than just the technology. You've got the business and operational models that need to go in place there as well. How do you get updates, for example? How do you manage entitlements and things of that nature? Those are technical problems that roll into business and operational problems that we've been working to address as part of the program, and actually have addressed in certain situations.

Gordon:  I'll probably get into a little more detail on some of those technical aspects in a few minutes, but I'd like to stay on this business and consumption angle for right now. What, fundamentally, needs to happen in order to make RHEL consumable in the cloud from the point of view of the customer?

Chris:  I'll answer your question by breaking the customer down as well. What we're seeing a lot of are what I would consider your grassroots folks, or the next Facebook: developers that don't have a lot of startup capital. They're trying to do the next thing. They go to the public clouds and are looking for a platform that they've heard of and can trust and start consuming. Then, on the other side of the spectrum, we've also got our existing enterprise customers. What I think the former group is looking for is something that they can just get started with. I've been quoted as saying, internally here at Red Hat, that I would like for RHEL in a cloud to be as easily consumed and picked up as CentOS. In other words, if you see Shadowman in the same list as the free alternatives, I would not want any kind of inhibitor there for you to just start using it. That would imply it's automatically entitled, with immediate access to updates, and knowing you could get support on it if you needed to, which is something you couldn't get with the free versions.

Then, for the enterprise customers, well, a lot of them have a significant investment with us, and they like having that direct relationship with us and want to continue that whether they're running their applications on‑premise or in a public cloud.

It's really those business models that we've had to help develop. For the latter, the enterprise, we have a concept called Cloud Access, which is a bring‑your‑own‑subscription model. Today that works at Amazon, but we are actively rolling it out to other certified cloud providers, because it's been in pretty high demand so far, and we want to keep expanding on that.

Gordon:  Basically, moving their subscription. They buy their subscription once and then run it wherever they want to.

Chris:  Absolutely. That goes back to my earlier comment about next‑generation OEMs. If you buy a subscription from Red Hat, well, you can run it on a Dell server, you can run it on an HP server, and you can run it on anything that's been certified. If you look at the public clouds as just an extension of that, for us, from a business standpoint, Cloud Access is the next logical step.

Gordon:  It's maybe been implied in our conversation, but we have tended to be talking about what's happening on‑premise and what's happening in a public cloud. Maybe not that many folks do a true hybrid thing today, but really, the ultimate goal here is that they've got their workload, they will develop it once, they will test it once, they will certify it once, and then they just want to be able to run it wherever it makes sense for them to do so, at a given point in time and at a given point in the application's life cycle, without having to redo everything.

Chris:  Absolutely. You heard me mention splitting up the customers earlier. Just like you, I know I've spoken to literally hundreds of customers, and it's both sides of that spectrum. For the folks that are grassroots, that's less of a problem. They care about it, but they're doing some pure development in a public space, and so they've actually architected their applications that way. What I'm seeing with the enterprise customers is that it's many times requiring a re‑architecture, and to your point, they only want to do that re‑architecture once and then just run it wherever they would like to. It absolutely is very fundamental to have that consistency. That's the key word for the program: consistency, not only in the technologies underneath, but in where they can get support and how they're actually going to pay for this, and having that across the board.

Gordon:  One of the interesting things about RHEL is it has its foot in both those camps, so to speak. On one hand, it provides, at some level, this traditional certified enterprise‑grade operating system. But then it also has, for example, multitenancy features like control groups, security features like SELinux, and kernel-based virtualization with KVM, which let people architect their applications with multiple types of multitenancy, which is what you need for these new-style cloud apps.

Chris:  Well, absolutely. It's very interesting that you bring up multitenancy. That's another key attribute of the certified cloud provider program: again, it's not just a pure OEM relationship, it's next-generation OEMs, so ensuring that there's multitenancy matters. Take, for example, Red Hat Enterprise Linux: if you go to a certified public cloud and you start up RHEL, the provider has already gone through a set of steps to ensure this is a clean image, because it is a multitenant environment. The last thing you'd want to do is accidentally start up something and see someone else's data in there. Checking off a lot of those operational pieces as well is a big part of it. Red Hat, as you mentioned, having all those attributes already natively in the OS extends that accountability and expectation from the market for us.

Gordon:  Just briefly, we've been talking a lot about the business relationships and this idea of providing service in a safe known environment. What are some of the things that we're doing technically with the public cloud providers in our certified cloud provider program?

Chris:  Sure. I've mentioned certification. It's not unlike what we do with the hardware certification. From that standpoint, will RHEL run in that environment? Can you start it? Can you enable and disable SELinux just like you can on premise? Does it have all the drivers that are needed to work? One advantage that the public clouds have is that some things, like a keyboard driver, are kind of worthless. You don't need things like you would on premise. One of the bigger things we've done concerns the public clouds in particular, when it's not the Cloud Access model I mentioned, where you bring your own subscription, but more the grassroots case, where we have no direct or even indirect relationship with our consumer.

We really need to ensure the integrity of the subscription in those situations. The key technology we've added is that all of the certified cloud providers are using essentially an in‑cloud update service: when you start these images, they're automatically entitled, so they can access this update service immediately and get all the additional RHEL packages they may need, plus critical and other updates, on the fly.

That was a really big step for our company, because for the last 10 or so years, we've had a methodology where, when you start RHEL, you have to register it with RHN, Red Hat Network, or with an RHN Satellite on premise before it's actually enabled. Now we are on the same playing field, if you will, as CentOS: you start Red Hat Enterprise Linux and it's just ready to go.

There's nothing to keep you from using it. You don't have to worry about contacting sales. All of that stuff is handled on the back end. It really becomes a seamless environment from that perspective.
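
As a rough illustration of what "just ready to go" means in practice, here is a hypothetical check that a freshly started image can already see its update service and resolve packages, with no registration step. The wrapper function and the test package name are assumptions for the example; it simply runs standard yum commands.

    # Hypothetical sketch: confirm a freshly started certified-cloud RHEL image
    # can already reach its in-cloud update service without manual registration.
    # The function and test package are illustrative; these are standard yum commands.
    import subprocess

    def verify_update_access(test_package="httpd"):
        # Enabled repositories should already include the provider's update service.
        subprocess.check_call(["yum", "repolist", "enabled"])
        # Resolving package metadata shows the repositories are actually reachable.
        subprocess.check_call(["yum", "info", test_package])

    verify_update_access()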

Gordon:  It's still a certified operating system.

Chris:  Absolutely, it's consistent and it's known. You can pick up the phone and speak to the provider and say, "Hey, I'm doing this on Red Hat Enterprise Linux," and they'll absolutely be able to help you. And even if something got misrouted somehow and ends up coming directly to us, when it's something that you got directly from the provider, well, our guys are going to be trained: "Oh, yeah, we know about this cloud, and we can help you." All of these are pieces of the program.

Gordon:  Well, great. Thanks, Chris. Anything else you'd like to share?

Chris:  No, thank you for your time. I'm really excited about continuing to make this pervasive, especially the cloud access pieces, so please look in the coming months for some other things from Red Hat related to that.

Gordon:  Well, thank you, Chris, and thank you, everyone.