- What makes the cloud secure or not?
- What new challenges the cloud brings
- Social engineering and security
- Two factor authentication
- Why open source and hybrid clouds are important
Listen to OGG (0:18:20)
------------------
[TRANSCRIPT]
I'm wrapping up the week putting the final touches on my Beyond Open Source in the Cloud presentation for CloudOpen week after next. The next week is going to be crazy; I need to get everything ready for VMworld and CloudOpen and then vacation in the Sierras the week after that.
As for the topic at hand, though, here's the description from the program:
Openness doesn’t stop and end with the submission of some format to a standards body or with the announcement of partners endorsing some specific technology platform. It doesn’t stop and end with open source either. An open cloud isn’t about having some singular feature. It’s about maximizing a wide range of characteristics that push the needle from closed to truly open. These include open source and open standards for sure. But they also include portability of applications and data, viable and independent communities, freedom from IP encumbrances, and APIs that are independent of specific implementations.
I've previously given a "lightning" version of this presentation at CloudCamp and some of the material is touched on during my broader cloud presentations. However, for this event, I've fleshed out my discussion of the various aspects of openness. The whole topic is very timely.
One need only look to Twitter API Apocalypse version 2,654 this past week to see just how timely. (And the fact that there's a story about APIs on CBS says something about just how important APIs--and, by extension, other aspects of openness--are to the modern computing world, even for those who have never written a line of code in their lives.)
My paper: Why the Future of the Cloud is Open
I'm on Wednesday, August 29 right after the keynotes. Come to San Diego and join us!
My Hawaii pix are up on Flickr: http://www.flickr.com/photos/bitmason/tags/hawaii/
As part of Red Hat's announcement of an OpenStack technology preview today, I wrote a blog post that provides some additional background. Here, I'm going to delve a bit more deeply into one of the topics that I cover in that post--namely, how do the different pieces of Red Hat's open hybrid cloud portfolio fit together? I'll be referring to the diagram below throughout this discussion.
[Diagram: how the pieces of Red Hat's open hybrid cloud portfolio fit together]
First, there is the infrastructure layer. This typically [1] consists of a hypervisor, its associated infrastructure management stack, and APIs providing the ability to control that management stack programmatically.
This is where OpenStack plays. OpenStack is an IaaS solution that manages a hypervisor and provides cloud services to users through self-service. (The OpenStack project supports a variety of hypervisors to various degrees; Red Hat is focused on KVM--the hypervisor used by Red Hat Enterprise Virtualization--which is part of Linux and has become pretty much the default open source hypervisor.) Perhaps the easier way to think of OpenStack, however, is that it lets an IT organization stand up a cloud that looks and acts like a cloud at a service provider. That OpenStack is focused on this public cloud-like use case shouldn't be surprising; service provider Rackspace has been an important member of OpenStack and uses code from the project for its own public cloud offering.
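To make the self-service point concrete, here's a minimal sketch of what a programmatic request to an OpenStack cloud looks like, using the Compute (Nova) REST API from Python. The endpoint, auth token, and image/flavor IDs are placeholders; in a real deployment the token would come from the Keystone identity service.

```python
# Minimal sketch: booting an instance via the OpenStack Compute (Nova) API.
# The endpoint URL, token, and image/flavor IDs below are placeholders; in
# practice the token comes from the Keystone identity service.
import json
import requests

NOVA_ENDPOINT = "http://cloud.example.com:8774/v2/TENANT_ID"  # placeholder
AUTH_TOKEN = "TOKEN_FROM_KEYSTONE"                            # placeholder

def boot_server(name, image_ref, flavor_ref):
    """Ask Nova to launch a new instance -- the core self-service operation."""
    body = {"server": {"name": name, "imageRef": image_ref, "flavorRef": flavor_ref}}
    resp = requests.post(
        NOVA_ENDPOINT + "/servers",
        headers={"X-Auth-Token": AUTH_TOKEN, "Content-Type": "application/json"},
        data=json.dumps(body),
    )
    resp.raise_for_status()
    return resp.json()["server"]["id"]

if __name__ == "__main__":
    server_id = boot_server("demo-instance", "IMAGE_UUID", "1")
    print("Requested instance:", server_id)
```

The point isn't the specific call; it's that the same kind of API-driven, on-demand request works whether the cloud is in your data center or at a service provider.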
This IaaS approach differs from the virtualization management offered by Red Hat Enterprise Virtualization, which is more focused on what you can think of as an enterprise use case. In other words, Red Hat Enterprise Virtualization supports typical enterprise hardware such as storage area networks and handles common enterprise virtualization feature requirements such as live migration. Both OpenStack and Red Hat Enterprise Virtualization may manage hypervisors and offer self-service—among other features—but they're doing so in service of different models of IT architecture and service provisioning.
Alternatively, the self-service infrastructure may be at a public cloud provider such as Amazon Web Services or Rackspace. Ultimately the goal is to make the underlying infrastructure decisions largely transparent to the consumer of the resources, such as a developer. Of course, where the resources are located, how they are managed, and what types of hardware functions they expose make a big difference to the ops team. But they're deliberately abstracted from those developing and using applications.
Then there is open, hybrid cloud management of those “cloud providers.” These providers can consist of the various types of infrastructure just described: on-premise IaaS like OpenStack, public IaaS clouds, and virtualization platforms (not just a hypervisor) like Red Hat Enterprise Virtualization or VMware vSphere. This is where Red Hat CloudForms comes in. CloudForms allows you to build a hybrid cloud that spans those disparate resources. It lets you build a "cloud of clouds" in a sense.
Equally important, however, is that CloudForms provides the lifecycle management of the content and images that will run across the hybrid cloud infrastructure. For example, CloudForms lets you specify content repositories which feed the construction and ongoing management of single- and multi-tier applications through Application Blueprints created by IT administrators. These Application Blueprints also embed policy. When a user chooses an available application environment through the self-service interface, it can only be deployed to a location enabled by policy. For example, development environments may be deployed to a public cloud while production applications may be deployed to an on-premise virtualization platform.
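To illustrate the policy idea--and this is a hypothetical sketch, not CloudForms' actual blueprint format or API--imagine a blueprint that names the provider types to which it may be deployed, enforced at the moment a user makes a self-service request:

```python
# Hypothetical illustration of policy-constrained deployment (invented names,
# not the real CloudForms blueprint format): a blueprint declares the provider
# types it may be deployed to, and the self-service layer enforces that.

BLUEPRINTS = {
    "webapp-dev":  {"tiers": ["app"],        "allowed": {"public-iaas"}},
    "webapp-prod": {"tiers": ["app", "db"],  "allowed": {"on-premise-virt"}},
}

PROVIDERS = {
    "ec2-east": "public-iaas",
    "rhev-dc1": "on-premise-virt",
}

def deploy(blueprint_name, provider_name):
    """Deploy only if the policy embedded in the blueprint allows this provider."""
    blueprint = BLUEPRINTS[blueprint_name]
    provider_type = PROVIDERS[provider_name]
    if provider_type not in blueprint["allowed"]:
        raise PermissionError(
            f"{blueprint_name} may not be deployed to {provider_name} "
            f"({provider_type})")
    return f"deploying tiers {blueprint['tiers']} to {provider_name}"

print(deploy("webapp-dev", "ec2-east"))   # allowed: dev goes to public IaaS
# deploy("webapp-prod", "ec2-east")       # would raise PermissionError
```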
Platform-as-a-Service (PaaS) is delivered by Red Hat OpenShift PaaS. PaaS is perhaps best thought of as an abstraction focused on the typical concerns of developers. Thus, instead of an operating system image-centric view (as an IaaS provides), PaaS is more oriented to a view that revolves around pushing and pulling code into and from repositories; the operation of the software needed to run that code is largely kept in the background.
Unlike a PaaS that is limited to a specific provider, OpenShift PaaS can run on top of any appropriately provisioned infrastructure whether in a hosted or on-premise environment. It then provides application multi-tenancy within the operating system images that make up the infrastructure. It does so using a combination of Linux Containers, SELinux for security isolation, and other Linux features. Red Hat's Matt Hicks spoke with me about some of these technologies in an interview a while back (podcast and transcript).
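If you want a feel for the SELinux piece of that isolation, here's a minimal Python sketch that simply reads a process's security context on an SELinux-enabled Linux system. OpenShift's actual mechanics are more involved, but the underlying idea is that each tenant's processes carry distinct labels of this sort, which the kernel enforces.

```python
# Minimal sketch: inspecting a process's SELinux context, the kind of
# label-based isolation OpenShift uses (alongside Linux containers and other
# kernel features) to keep tenants apart within a shared OS image.
# Only meaningful on an SELinux-enabled Linux system.

def selinux_context(pid="self"):
    """Return the SELinux security context of a process, or None if unavailable."""
    try:
        with open(f"/proc/{pid}/attr/current") as f:
            return f.read().strip("\x00\n")
    except OSError:
        return None  # kernel without SELinux support, or SELinux disabled

ctx = selinux_context()
if ctx:
    # e.g. "unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023";
    # tenant processes would differ in the trailing MCS categories.
    print("This process runs as:", ctx)
else:
    print("SELinux not available on this system")
```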
This approach allows organizations to not only choose to develop using the languages and frameworks of their choice but to also select the IT operational model that is most appropriate to their needs. The provisioning and ongoing management of the underlying infrastructure on which OpenShift PaaS runs is where virtualization, IaaS, and cloud management solutions come in. (After all, someone needs to operate the PaaS infrastructure whether it's on-premise or at a cloud provider.)
Nor does Red Hat Cloud end with "cloud products." For example, Red Hat Enterprise Linux--in addition to providing features used by offerings such as OpenShift--also provides a consistent and reliable runtime for applications as they move across different environments such as on-premise and public clouds. Red Hat Storage (from our Gluster acquisition) provides a distributed, scalable, software-only filesystem that will be an important part of data portability across clouds.
Sound complicated? It is a bit, I guess. But when you're talking about such a big change in the way that IT systems are operated and applications are consumed, some complexity is unavoidable. (Which is one reason we're so focused on solutions. But that's a topic for another day and another blog post.)
-------------------
[1] In a future version, CloudForms will be able to provision "bare metal" physical servers using Foreman/Puppet components. In this respect, CloudForms includes the ability to build an IaaS. However, for our purposes here, I'm going to focus on how CloudForms builds hybrid cloud resource pools on top of IaaS and virtualization management products and manages the applications running in those pools.
Derrick Harris at GigaOm has a piece up about the "IT world's love-hate relationship" with OpenStack. It seems a balanced piece overall even if a lot of "Why the hate" either boils down to pre-foundation governance issues or generalized "concerns." The cynical might be inclined to label some of this as FUD coming from those with commercial interests opposed to OpenStack. If you attended GigaOm's Structure 2012 conference, you saw some of this dynamic in play in the debate over APIs. In a nutshell, does a popular de facto API like AWS trump APIs that are actually open? Contrary to the fervent denials one hears, from where I sit, there is very much an anti-OpenStack camp.
On the "love" side described by Harris, I'd add that, in addition to the "mega-vendor" and large end-user backers, there's also huge breadth of participation; April's OpenStack Conference in San Francisco had over 1,000 people registered. It's hard to argue against the proposition that OpenStack has a lot of momentum going for it.
Leaving aside the pro/con snippets, though, Harris' conclusion strikes me as fair, and pretty positive, whether or not you agree that all the knocks on OpenStack he details are truly newsworthy.
Perhaps it’s just par for the course that any project with so much hype, representing such a lucrative opportunity, and comprised of big egos all around is going to be a hotbed of in-fighting and allegations. But if the companies involved can hold OpenStack together enough to keep everyone headed in the same direction, it’s hard to see how it won’t be a major factor in the cloud space for a long time to come.
My employer, Red Hat, is a platinum member of the OpenStack Foundation.
From Mobile Commerce Daily:
Following a successful test on five routes, Amtrak is expanding a digital ticketing program to all trains, enabling passengers to use their smartphones to present tickets to the conductor...
Passengers using a smartphone or other mobile device can present the eTicket to the conductor by opening the document from the email.
ETickets can also be printed from any printer, including at Amtrak ticket offices and Quik-Trak kiosks.
Additionally, passengers can also buy tickets and display eTicket bar codes with the Amtrak mobile application.
With the eTicket program, passengers can also easily change reservations and lost or misplaced tickets can be easily reprinted.
Nice. I actually don't find mobile tickets all that big a win with the airlines most of the time; it's just not usually that big a deal to get a boarding pass from an airport kiosk. (Although I typically print out my outbound boarding pass at home when I remember.)
But I use Amtrak, most often from Boston to New York, differently than I use planes. The timing of my outbound leg is pretty set--way too early in the morning. But, for my return, I'm usually in the position of guessing how an event or set of meetings is going to play out and taking a stab at a return time that may or may not turn out to make a lot of sense.
The problem is that, although you can change train ticket times without penalties, in practice dealing with lines at Penn Station and with call queues for reservations can make changing a ticket more hassle than it's worth. If mobile tickets make the process a lot more streamlined, that's a big win.
Another advantage worth mentioning is that current Amtrak tickets, once issued, are essentially just like cash--as I know from experience--and are very hard to get refunded or replaced, just as airline tickets once were.
For most purposes, this shift away from significant value being embedded in arbitrary bits of paper is a welcome one--but it does raise the stakes on back-end infrastructure. It has to be resilient and scalable. The network pipes going in and out have to be solid. It also potentially creates complications if always-connected mobile devices aren't, in fact, always connected (although mobile apps that store past transactions can help).
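As a hypothetical sketch of that mitigation (invented names, not Amtrak's actual app), the idea is simply to cache each ticket's barcode locally at purchase time so it can still be displayed with no connectivity:

```python
# Hypothetical sketch of the offline mitigation above: cache each purchased
# ticket locally so its barcode can still be shown when the device has no
# network. File location and field names are invented for illustration.
import json
import os

CACHE_PATH = os.path.expanduser("~/.etickets.json")  # illustrative location

def save_ticket(cache, reservation_id, barcode_payload):
    """Store the ticket at purchase time, while we know we're connected."""
    cache[reservation_id] = {"barcode": barcode_payload}
    with open(CACHE_PATH, "w") as f:
        json.dump(cache, f)

def load_tickets():
    """Read previously cached tickets; works with no network at all."""
    try:
        with open(CACHE_PATH) as f:
            return json.load(f)
    except (OSError, ValueError):
        return {}

tickets = load_tickets()
save_ticket(tickets, "RES123", "BARCODE-DATA")
print(load_tickets()["RES123"]["barcode"])  # displayable offline
```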
Because, increasingly, there just won't be a good manual fall-back if the digital systems don't work.