Thursday, January 23, 2014

Links for 01-23-2014

Tuesday, January 21, 2014

Links for 01-21-2014

Thursday, January 16, 2014

Links for 01-16-2014

Wednesday, January 15, 2014

Links for 01-15-2014

Monday, January 13, 2014

What Red Hat's up to with partners and AWS Test Drive

The AWS Test Drive Program is a way to easily try out enterprise software. To quote AWS:

Test Drive simplifies clients’ access to complex IT environments, using the programmable infrastructures of AWS. Test Drive enables customers to rapidly deploy a private sandbox environment containing pre-configured ISV server applications that are ready to Demo and use. Test Drive labs are provided from the APN partner ecosystem, providing rapid provisioning of private IT environments. In a few minutes you can login and start using the software, following a guided tour Video and Lab Manual.

Test Drive labs have been developed by our APN Consulting and Technology partners and are provided free of charge for educational and/or demonstrational usage. Each Test Drive includes around a half a day’s use of free AWS server time for using live enterprise solution stacks, from the industry’s leading ISVs and SIs. You can return here and try any or all of the Test Drives at any time, so feel free to experiment, explore and learn.

The basic idea behind Test Drives is that you can get free limited-time access to complex enterprise software and work through a scripted use case to evaluate that software quickly. Software can then be purchased through AWS Marketplace or other channels. AWS Test Drive is a fairly new program, rolled out relatively quietly last year.

Quoting Red Hat North America Channel Sales Senior Director Bob Wilson, The VAR Guy writes: "The test drives lend themselves to complex solutions, so partners that have offerings that require multiple steps, components and complexity to display to an end customer are optimal candidates," he said. "However, any solution-based offering that solves a specific customer issue would make a good test drive."

Red Hat's announcement today is for new test drives with three of our largest North American Partners. Mark Enzweiler, Red Hat's senior vice president, Global Channel Sales:

We've enjoyed working with CITYTECH, Shadow-Soft, and Vizuri to develop these initial solutions, and are eager to develop additional Test Drives with other partners. We believe these Test Drives are invaluable; they enable partners to use their expertise in pulling together complete solutions to solve complex customer challenges, and illustrate how easily customers can use these tools to migrate to the cloud.


Punting on the Cam

Punting on the Cam, a photo by ghaff on Flickr.

This is from my trip to the UK last fall. I plan to be in London and Belgium in a few weeks for at least part of the late-January/early-February run of open source events.

Links for 01-13-2014

Friday, January 10, 2014

Why you need a cloud management platform

I put this post up on the Red Hat OpenStack blog. If you haven't checked this blog out, give it a look and consider subscribing. 

Cloud infrastructure and cloud management. As an industry, we conflate these two things far too often.

This is understandable up to a point. Cloud computing architectures are relatively new, and new architectural approaches often involve figuring out how functions are best partitioned and how they relate to each other. The process tends to be pragmatic; that’s how the networking stack first developed. The fact that terminology is often morphing and inconsistently applied (innocently or otherwise) doesn’t help matters.

The overall building blocks of the private and hybrid cloud stack have now crystallized to a significant degree. The boundaries of these blocks aren’t hard-edged, of course; there’s always overlap in the management space given that basic functions tend to come built-in even if they’re superseded at scale or for more complex requirements. But we’re at a point where we can describe the relationship of a cloud platform such as OpenStack to cloud management platforms (CMPs) like CloudForms that shouldn’t be too controversial.

Continue reading at Red Hat Stack

Thursday, January 09, 2014

Links for 01-09-2014

Fishing boat in fog

Fog, a photo by ghaff on Flickr.

Finally got around to processing my photos from a couple of Maine trips during the summer/fall last night.

Wednesday, January 08, 2014

Will IaaS and PaaS converge?

tl;dr version: No, per Betteridge's Law of Headlines (in many cases). But if you want a more nuanced take on this question, you'll need to read on.

Some history

The definitions that we use for the layers of cloud computing today--Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS)--are enshrined in a remarkably thin document, NIST Special Publication 800-145, which wasn't finalized until October 2011, by which time many aspects of cloud computing were in full swing. However, this publication has been influential nonetheless because it began life as a draft in 2009 and, furthermore, was developed together with a large number of users and vendors. Indeed, NIST noted upon the finalization of the publication that "While just finalized, NIST's working definition of cloud computing has long been the de facto definition."

Here's how NIST defines IaaS, PaaS, and SaaS respectively: 

[IaaS] The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls).

[PaaS] The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.

[SaaS] The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

It's worth noting at this point that the PaaS service model was a relatively late entrant to the discussion. For example, when I wrote a research note that took a crack at defining cloud computing architectures at the beginning of 2008, I only discussed IaaS and SaaS. The on-demand services these provided were clear. IaaS provided compute, storage, networking and related services--server-like things. We already had a working example in Amazon Web Services (AWS)--which was also starting to expand beyond basic infrastructure with SimpleDB. SaaS was at least equally well-understood; it was a Web app. [1]

The point of this history lesson? Two-fold.

First, it's to point out that the widely-accepted NIST cloud computing definition focuses specifically on the level of abstraction presented to a generic consumer. Second, it's to show that PaaS is defined, at least in part, as something that sits in between IaaS and SaaS--which were far better understood at the time, by way of concrete examples like AWS and Salesforce, than was PaaS.

How do IaaS, PaaS, and SaaS relate?

The significance of PaaS filling the space between an IaaS and a SaaS is that it touches both of those abstractions. Although a PaaS like OpenShift by Red Hat can sit on bare metal, it can also take advantage of flexible IaaS infrastructure. I'm not going to get into all the details of how OpenShift might use the OpenStack IaaS, for example, but I'll touch on what some of those integration points are and how they might evolve in a bit.

It's also worth observing here that simply thinking of PaaS as a higher level of abstraction than IaaS for a generic consumer of computing resources misses an important distinction. PaaS presents an abstraction that is primarily of interest to and used by application developers. IaaS can also appeal to developers seeking more options and control, of course. But a PaaS like OpenShift focuses on giving developers and/or DevOps the tools they need and then getting out of the way. IaaS is infrastructure--and therefore often more focused on the system admins who support developers (whether through a PaaS or otherwise) and other consumers of business services. This will only become more true as IaaS, or something close to it, increasingly becomes how computing infrastructures are built--whether at a cloud provider or in an enterprise.

SaaS also touches the PaaS layer. This interface typically takes the form of what analyst Judith Hurwitz refers to as a PaaS anchored to a SaaS environment. Another way to think about this is that software is increasingly expected to surface APIs so that users can extend and integrate that software as they need to. These APIs and surrounding tooling may constitute a sufficiently rich and extensible environment to be considered a PaaS (as in the case of Salesforce).

The blending of IaaS and PaaS

Given the relationship I've described, it's reasonable to ask whether IaaSs won't just add abstractions until they're PaaSs or whether PaaSs won't just build in the infrastructure they need until they don't need a discrete IaaS layer. 

This will happen in some cases. Azure is an example of a PaaS offering that is a monolithic stack (and which can now also run operating system images as well as .NET applications). A variety of AWS services go beyond the infrastructure layer (databases, Hadoop, Elastic Beanstalk).

However, as discussed above, IaaS and PaaS often address different types of consumers--who may have different types of requirements--so in many cases there will likely be benefits to having a PaaS that is discrete from (but integrates well with) an IaaS, as well as other types of software.

How might this integration work with OpenShift and OpenStack?

OpenShift, like other PaaSs on the market, uses a form of Linux containers. (Red Hat's now collaborating with Docker on containers; Docker is planned for inclusion in Red Hat Enterprise Linux 7.) They're lightweight and quick to spin up and spin down. But to the degree that OpenStack and OpenShift don't talk to each other, neither has any visibility into optimization possibilities. However, as Red Hat's Matt Hicks notes, if a PaaS

is natively integrated into OpenStack, things get really interesting. The containers themselves could be managed in OpenStack, opening up full visibility to the operations team. They wouldn’t just see a virtual machine that is working really hard, they would see exactly why. It would enable them to start using projects like Ceilometer to monitor those resources, Heat to deploy them, etc. In other words they could start leveraging more of OpenStack to do their jobs better.

The OpenStack Solum project is one of the efforts that Red Hat (along with a variety of other companies) is working on with an eye to this sort of integration. Solum is intended to meet various developer needs (integrated support for Git, CI/CD, and IDEs; taking advantage of Heat orchestration; etc.) in what you can think of as a PaaS-like way, but without all the trappings of a full-fledged PaaS.

The bottom line here is that there's a continuum between a bare-bones IaaS and a full-fledged development platform. This continuum can be thought of as lying along an axis from complete fine-grained control on one side to various hosted PaaSs on the other. Even this oversimplifies things, though, as offerings may also differ based on target workloads or other aspects. Which is another reason why a monolithic IaaS+PaaS may not be the best approach.

Finally, as I wrote at the beginning, PaaS is really the youngest of the cloud service models. So it probably shouldn't be surprising that it's evolving so rapidly. (Although all the community energy around OpenStack is creating lots of innovation and change at the IaaS layer as well.) And that evolution will continue--which may well mean that our understanding of the optimal locations for abstractions and interfaces will evolve too.

Red Hat's cloud portfolio philosophy

Our approach to working on integration points between OpenStack and OpenShift--while leaving customers the ability to use them separately as well--pretty much sums up our philosophy across our entire product portfolio: Red Hat Enterprise Linux, our Red Hat CloudForms and Red Hat Satellite management products, JBoss Middleware, and Red Hat Storage, in addition to OpenStack and OpenShift. Much of this integration work is happening in the upstream communities. You can see other examples in the reference architectures created by our systems engineering team. (See, for example, Deploying a Highly Available OpenShift Enterprise 1.2 Environment - Using RHEV 3.2 and RHS 2.1.) Openness and flexibility are at the core of our cloud strategy, and that applies whether you just want IaaS, just want PaaS, or want a well-integrated combination of the two.


[1] I actually used the Hardware-as-a-Service term in that research note; it was being used mostly interchangeably with IaaS at the time. I also discussed the idea of Data-as-a-Service, which was primarily about data returned through APIs--an important trend, but one that isn't really a direct part of today's cloud computing service model.

Links for 01-08-2014

Butterfly on daylilies

Butterfly on daylilies, a photo by ghaff on Flickr.

Taken in warmer days in New England.

Tuesday, January 07, 2014

Links for 01-07-2014

Friday, January 03, 2014

Links for 01-03-2014

Thursday, January 02, 2014

Links for 01-02-2014

Podcast: Identity Management and crypto with Red Hat's Ellen Newlands and Matt Smith

Ellen Newlands shares new IdM and cryptography features in Red Hat Enterprise Linux--including the new RHEL 7 beta--while Matt Smith talks about some trends that he's seeing at the customers he speaks with such as the desire to extend enterprise identity into public clouds.

Listen to MP3 (0:11:14)
Listen to OGG (0:11:14)


Gordon Haff:  You're listening to the Cloudy Chat podcast with Gordon Haff.
Hi, everyone. This is Gordon Haff, cloud evangelist with Red Hat, and today I've got two guests here with me. I've got Ellen Newlands, who runs product management for our identity and security products, and I've got Matt Smith, who's a solution architect in the Northeast region with Red Hat. Matt's going to have some great insights about some of the conversations he's having with customers around security and identity.
I'd like to start off first with you, Ellen. What's new?
Ellen Newlands:  Well, I have to say, Gordon, especially with Red Hat Enterprise Linux 7.0 going into beta ‑‑ as you know, we just went into beta with 7.0 at the beginning of this month, December ‑‑ there's a lot that's new in identity management. Many of you may remember that we've included identity management as a feature set in RHEL, which means that it is free with the RHEL subscription. In 7.0, we are bringing out some new functionality that we think is particularly useful.
A lot of customers have Active Directory as what we call their authoritative source for identity in a Windows environment, and yet they'll very often have a very, very large Red Hat Linux deployment, particularly in development or in test. One of their questions is always, "How do I best manage my Linux identities but maintain my capabilities to have Active Directory as the authoritative source for regulatory and compliance purposes?"
Well, in RHEL 7.0, we're shipping something we're calling cross‑realm Kerberos trust. What does that mean? What that actually means is that we have put together a very secure scheme for setting up a trust between Active Directory and what we call the IPA ‑‑ or identity, policy, and audit server, the server piece of identity management in RHEL ‑‑ so that your users in a Windows environment can use their Active Directory credentials and have them passed to an identity server for Linux and then securely and safely reach Linux resources without having to, for example, change one authoritative source for another. In other words, keep your Active Directory, set up a trust with identity management in Linux, and enable your Windows users to access the Linux resources that they would want. We are beginning beta on this now, and we have already had some very good feedback on this functionality.
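[For the curious: the cross-realm trust Ellen describes is established from the IPA side with a couple of commands. A rough sketch, using hypothetical domain names (ipa.example.com, ad.example.com) and assuming the IPA server can already resolve the AD domain's DNS records:]

```shell
# Hypothetical domains -- substitute your own IPA and AD domain names.
# 1. On the IPA server, install the AD-trust support pieces:
ipa-adtrust-install --netbios-name=IPA --add-sids

# 2. Establish the cross-realm trust using AD administrator credentials
#    (prompts for the Administrator password):
ipa trust-add --type=ad ad.example.com --admin Administrator --password

# 3. Verify by resolving an AD user from an IPA-enrolled client via SSSD:
id 'aduser@ad.example.com'
```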
Now, I did want to mention that, in addition to this, there are customers who do not wish to have any kind of a second domain in Linux, so we have functionality that we call SSSD, which is client functionality that will allow you to connect your individual Linux resources or hosts directly into Active Directory should you prefer. We believe that this gives us a wider reach in today's heterogeneous environment for identity management.
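[The direct-to-AD SSSD option Ellen mentions boils down to a small client-side configuration. A minimal sketch, again with the hypothetical domain ad.example.com; in practice, tooling such as realmd's `realm join` can generate this for you after the host is joined to the domain:]

```ini
# /etc/sssd/sssd.conf -- minimal sketch for a Linux host bound directly to AD
[sssd]
services = nss, pam
domains = ad.example.com

[domain/ad.example.com]
id_provider = ad
access_provider = ad
# Cache credentials so users can still log in if the DC is unreachable:
cache_credentials = True
```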
Gordon:  Going to talk a little bit about crypto in a couple minutes, at the risk of having people's heads explode, but for right now, Matt, maybe you can tell us a little bit about what you're seeing out in the field. You spend a lot of time talking to customers, and I'm sure you've got a lot of good insights about what they're seeing out there.
Matt Smith:  Sure, absolutely. Thank you, Gordon.
Really, what we're seeing come forward with the RHEL 7 beta here, with the new features and functionality in IdM, this really addresses some of the calls we're seeing in the field. Customers have a huge investment in Active Directory ‑‑ in the infrastructure they've deployed, in the processes they've developed. As Ellen already described, really being able to bring forward a solution that allows the Linux environments to interact with an Active Directory but have the features and functionality that Red Hat IdM provides, which, beyond just the authentication, also has the management of the access control within that Linux environment, and it gives the Linux admins the ability to interact very, very easily with that IdM environment. Being able to have that integration with an established Active Directory meets a very, very high demand from our customers.
Gordon:  You've been doing this for a while. What are some of the trends that you see out there? What's different today than if I was asking you this question maybe a couple years ago?
Matt:  That's a great question. Really, the newest trend that we're seeing, and really, it's been developing over the past few years ‑‑ how do I extend my enterprise identity into the cloud? As software‑as‑a‑service options are becoming more and more attractive, as platform‑as‑a‑service and infrastructure‑as‑a‑service offerings out in the public cloud become more available, more cost‑effective, and more feasible for many of our customers, they look at that credential set that today might live inside of an Active Directory or inside of a Red Hat IdM, and they question whether they should extend that to those outside public services, whether they should be creating new IDs and passwords out there in that public space, if there's a way that doesn't violate network security principles to tie those systems back into credentials inside their data center.
Of course, here, we are very aware of the other authentication activities in the world, whether this is SAML in the federated authorization space or OpenID and OAuth, and we're developing strategies around how to leverage those technologies to extend enterprise identity into those cloud services.
Gordon:  Ellen, let's talk a little bit about crypto specifically. I know we've got some new features out there, so maybe if you could explain it without having people's heads explode too much, I think that'd be interesting.
Ellen:  I did want to start by saying I have worked with what we call the crypto geeks for about half of my working life, and I will tell you, you can always spot them in a crowd.
Having said that, all crypto is essentially mathematically based. One of the best protections for any of the cryptographic algorithms that keep your communications and your data safe and locked up is that it takes so long, using computers, to crack the code. As computer power has increased, the algorithms that were in common use are more easily cracked. It takes less time. Cracking an algorithm is all about the compute time it takes to crack it. With the expansion of compute power and the high demand for security, the National Institute of Standards and Technology ‑‑ known lovingly as NIST ‑‑ recently set out standards and recommendations for what we would call higher‑order cryptographic algorithms, which they call Suite B.
Now, Red Hat Enterprise Linux 5.10 and 6.5, which just recently went GA, and 7.0, which is in beta now, have all included some new Suite B cryptography in addition to the original algorithms they had. One of the more interesting pieces of cryptography that has been included is something called elliptic‑curve cryptography. The reason that this is interesting is that, for less processing power and less compute power, it offers stronger crypto than had previously been available.
I think the basic point here is that the crypto in Red Hat Enterprise Linux has been updated, which ensures safer communication, safer data at rest and in motion. As the standards change, I just want it on record that RHEL and the feature set in RHEL keep up with the changes and recommendations.
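[If you want some intuition for what Ellen is describing: elliptic-curve crypto rests on a group operation that is cheap to compute forward but believed hard to invert. A toy illustration over a tiny prime field -- not RHEL's actual implementation, which uses vetted libraries like NSS and OpenSSL with ~256-bit primes -- just to show the arithmetic involved:]

```python
# Toy elliptic-curve arithmetic over a tiny prime field -- purely illustrative.
P = 17          # field prime (toy-sized; real curves use ~256-bit primes)
A, B = 0, 7     # curve: y^2 = x^3 + 7 (mod P), the same shape as secp256k1

def inv(x):
    """Modular inverse via Fermat's little theorem (P is prime)."""
    return pow(x, P - 2, P)

def add(p, q):
    """Add two curve points; None represents the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                               # p + (-p) = infinity
    if p == q:
        m = (3 * x1 * x1 + A) * inv(2 * y1) % P   # tangent slope (doubling)
    else:
        m = (y2 - y1) * inv(x2 - x1) % P          # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, p):
    """Double-and-add: compute k*p in O(log k) group operations."""
    result = None
    while k:
        if k & 1:
            result = add(result, p)
        p = add(p, p)
        k >>= 1
    return result

G = (1, 5)            # a point on the curve: 5^2 = 25 = 8 = 1^3 + 7 (mod 17)
priv = 7              # "private key": just a scalar
pub = mul(priv, G)    # "public key": fast to compute -> (2, 7)
```

[Computing `pub` from `priv` takes only log-many doublings, but recovering `priv` from `pub` is the elliptic-curve discrete logarithm problem -- which is why comparatively small EC keys match the strength of much larger RSA keys, the "more security for less compute" trade-off Ellen mentions.]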
Gordon:  Matt, out in the field, how are you seeing use of crypto out there? Is it increasing? Are people being more aware of the technical details? What are the trends that you see there?
Matt:  Absolutely. Crypto becomes more important every day, but at the same time, the assumption is generally that it is just there. At this point, being able to see that HTTPS in your URL bar in your favorite browser is just an assumed technology ‑‑ "Oh, it's HTTPS, therefore it is secure." Of course, as customers look to move data out, again, into the cloud, or they start expanding where their data lives ‑‑ it's no longer just within the four walls of their existing data center ‑‑ really being able to encrypt that data, in flight or at rest, becomes more and more critical and more and more of an assumption on our customers' part.
Gordon:  Are there changes in the way they're doing key management these days? Encryption is easy. It's the key management that's the hard part.
Matt:  Absolutely. There are a number of vendor products out there for key management, as well as, when we look at certificate‑based management, our own certificate management capabilities within Red Hat IdM and within Red Hat Certificate Server. We provide those capabilities, but again, as customers look to distribute the geography of their data, this is a challenge space where there's still a lot of room left to find proper solutions.
Gordon:  Ellen, maybe we can start to wrap up here. Anything else that you'd like to share?

Ellen:  I would like to say that if customers are interested in any of the new capabilities for identity management in Red Hat Enterprise Linux 7.0, we have instituted a high‑touch beta program specifically for those who are interested in the identity functionality, and customers are still welcome to join that beta program because it runs from now until the middle of March. There's plenty of time to get a look at the new features and, if time permits and resources allow, to give them a test run.