Thursday, January 24, 2013

Links for 01-24-2013

Wednesday, January 23, 2013

Links for 01-23-2013

Friday, January 18, 2013

Links for 01-18-2013

Wednesday, January 16, 2013

Podcast: The three audiences for enterprise Platform-as-a-Service

Platform-as-a-Service (PaaS) is often pushed as a solution for developers. It is that. But enterprise PaaS is equally appealing to system admins and enterprise architects. In this discussion with Red Hat OpenShift product manager Joe Fernandes, we talk about how PaaS brings benefits to each of those constituencies.

Listen to MP3 (0:13:28)
Listen to OGG (0:13:28)


Gordon Haff:  Hello, everyone. This is Gordon Haff, cloud evangelist with Red Hat. And I'm sitting here with Joe Fernandes, the senior product manager for OpenShift, Red Hat's Platform‑as‑a‑Service.
By way of background, Platform‑as‑a‑Service is something that's been getting a lot of attention recently, particularly with hosted services, although now, as with OpenShift, we're also seeing more and more on‑premise offerings. Platform‑as‑a‑Service has historically, I think, been viewed as something for the developer. That is absolutely true, and we're going to talk about that a bit. But one of the interesting trends we see with Platform‑as‑a‑Service, as Joe is going to tell us about, is that we're also seeing a lot of interest from other groups within enterprise IT departments ‑‑ system administrators, enterprise architects.
Welcome, Joe.
Joe Fernandes:  Hi, Gordon.
Gordon:  Why have enterprises been reluctant, in many cases, to adopt Platform‑as‑a‑Service until recently?
Joe:  I think the challenges that enterprises have had with Platform‑as‑a‑Service are just the challenges that they've had with adopting public cloud services in general. Enterprises have a lot of inherent challenges in terms of ensuring security, challenges with compliance that may be specific to their industry or vertical, and so forth. They may have issues like data privacy, depending on what part of the world they do business in, even governance and processes that they've set up, and also need to support a disparate set of applications, new applications, legacy applications. They have complex infrastructure. Much like cloud in general, for Platform‑as‑a‑Service oftentimes these various concerns that you see in enterprises really would preclude them from going full‑force into the public cloud to leverage some of the popular public PaaS services that exist today.
Gordon:  Also, at least in the early days, a lot of those public PaaS services were also really point solutions for, say, a particular programming language.
Joe:  Right, exactly. I think, initially, a lot of the PaaS services that were introduced were focused on a specific language, although these days I think more PaaS vendors are moving to a polyglot platform, which is something that we've always believed in here at Red Hat with OpenShift, supporting multiple languages and frameworks. And as you mentioned, it's a particular concern of the enterprise, because they do have so many different types of applications and in most cases are using multiple languages and different technologies to build those apps.
Gordon:  As I said in my introduction, one of the interesting trends that we're seeing is that groups other than developers are getting very interested in the benefits that PaaS can bring to them. Before we get into that, let's talk about the developer. What does a PaaS do for a developer?
Joe:  For developers, really, Platform‑as‑a‑Service is all about bringing greater agility and giving them a greater degree of self‑service, really removing IT as the bottleneck to getting things done. In public PaaS services like OpenShift, developers can come and instantly begin deploying applications. They can choose from a variety of languages and frameworks and other services like databases and so forth. And they don't need to wait for systems to be provisioned and software to be configured. The platform is all there waiting for them, so they can be productive much more quickly. And really, what that means is that they can focus on what matters most to them, which is really their application code. They can iterate on their designs and really see the applications up and running without having to worry about how to manage what's running underneath.
Gordon:  In a lot of cases, particularly with newer dynamic languages and the like, if the developer never sees the operating system, that's just fine with them.
Joe:  Absolutely, yeah. I think most developers would agree with that.
Gordon:  You talked about some of these other groups as well. Let's talk about system admins. Why are they interested in Platform‑as‑a‑Service?
Joe:  It's been very interesting. Since we launched OpenShift, here at Red Hat, we have received lots of interest in Platform‑as‑a‑Service, and OpenShift in particular, from our customer base and the industry as a whole. But a lot of that isn't strictly limited to developers. You mentioned system administrators. We've spoken to a number of IT operations teams, including system administrators, enterprise architects and so forth, and they see Platform‑as‑a‑Service as a way to help them better serve their developers, helping the business accelerate the delivery of new services, which often comes in the form of new applications, whether they be Web applications or mobile applications.
And being able to standardize the developer work flows. What I mean by that is the process that they need to go through every time that a developer starts on a new project, really standardizing the process and getting them provisioned with the infrastructure they need, with the software they need, so that they can start either developing or doing testing or performance testing, or even getting those applications all the way through to production.
What we've also seen is administrators are interested in how this helps them make better use of their infrastructure. And we talk about this quite a bit. We saw a real sea change, as enterprises moved from running purely physical infrastructures, bare‑metal servers and so forth, to more virtualized environments.
That transition has occurred at most of our accounts, and most developers today that you talk to, they're working with virtualized infrastructure, and in some cases enterprises have set up large virtualization farms where developers can self‑provision VMs, and maybe there's catalogs of templates that they can take advantage of.
That's largely been adopted. The question is, where do they go next? How can they get greater efficiency? How can they bring even greater agility to their developers, beyond what they've been able to achieve with virtualization alone?
Gordon:  Efficiency and agility and speed seem to have really become important. It's sort of interesting. For a while, I think a lot of people were saying, "This Software‑as‑a‑Service thing is going to replace enterprise applications." But really, what we're seeing is, as IT, information technology, becomes more and more central to more and more types of businesses, developing applications in‑house is really that much more important, and there's almost an infinite appetite for it.
Joe:  Yeah. What I think would surprise people is just the sheer number of applications that enterprises are building today and that they're responsible for, and how this multiplies as you introduce new form factors, like mobile versions of the application or applications that need to run on tablet‑based computing devices, iPads and so forth. As you mentioned, even if you go with a Software‑as‑a‑Service‑based solution ‑‑ say, Salesforce, which is very popular ‑‑ even when you go with a solution like that, what you still see is enterprises building applications that will tie into Salesforce or other SaaS applications, third‑party applications that they may bring in‑house. They still need to build complementary applications, whether it's for quoting, tying in with partners, billing, or what have you, things that go beyond purely what they could get from their SaaS provider.
Absolutely, we see no limit to the appetite for new application development within the enterprise customer base.
Gordon:  Actually, when we talk about some of the more popular enterprise Software‑as‑a‑Service, really, they're a Software‑as‑a‑Service from the perspective of the end user, but they're really a specialized Platform‑as‑a‑Service from the perspective of developers.
Joe:  Yeah, that's correct. In a lot of cases, those SaaS applications, as I mentioned, create the need or the opportunity to build complementary services around that that really suit the needs of a specific business and may not be addressed by the SaaS provider themselves.
Gordon:  Let's talk about the third audience here, enterprise architects. You talk to a lot of them, I know. What's their interest in PaaS? What's their angle?
Joe:  Ultimately, enterprise architects are really trying to marry the IT infrastructure, the IT operation, to the needs of the business, right? So they have to understand where the business is going and how IT's going to support that and how to architect their infrastructure, their applications, their processes to address those needs. What we've already mentioned is that there's a tremendous growth in demand from the business for new services, new applications, new mobile applications, new web services and so forth. And the question is, how is IT going to support that demand? Really, it falls on enterprise architects to help figure that out.
We've seen that interest in different forms. One is just, again, looking from just a pure developer‑agility perspective. How can they make developers more efficient and reduce the bottlenecks within the IT process so that developers can quickly get up and running when a new application product is initiated?
Something like Platform‑as‑a‑Service, where, again, developers have self‑service capabilities, they have a catalog of middleware and database services at their disposal that they can use, choices of languages and frameworks and so forth that they can use to start in on their projects, that brings a lot of efficiency in terms of this whole work flow.
But then you flip it around to looking at infrastructure utilization. Again, virtualization has brought a lot of efficiencies in terms of making better use of physical infrastructure resources. Platform‑as‑a‑Service goes further in terms of introducing this concept of VM‑level multi‑tenancy and so forth. In a Platform‑as‑a‑Service, what the provider is doing is oftentimes running not a single application per VM host but actually running several applications and making use of things like Linux multi‑tenancy.
Again, particularly as these applications get smaller and as you start seeing larger numbers of applications that you have to support, thinking about how the infrastructure is going to be able to scale to support those new applications from a hosting perspective is something that the enterprise architects and the IT administrators need to think about.
It really is both sides. How do you make the development process itself more efficient, and how do you make infrastructure that supports that process more efficient and more scalable to allow the business to grow?
Gordon:  Without getting into the latest trendy terms and the like, it's also a reality, whether you call it DevOps, ITOps, NoOps, whatever kind of ops, that these different audiences are much more closely connected and have to work in each other's worlds to a greater degree than was historically the case.
Joe:  Yeah, exactly. Like I said, the industry will cycle through its buzz terms and concepts and so forth, but you're never going to eliminate the role of the IT operations team in an enterprise context. What you need to do is figure out how the operations team can work more effectively with the development side of the house to meet the needs of the business. To me, it's not dev or ops. It's really both. The developers aren't going to take over the job that the IT operations team does any less than the IT operations team is going to be able to build and deploy their own applications and so forth.
The question is, how do both sides work more effectively together? How do they reduce friction and really help accelerate time to market? Because, ultimately, that's all the business cares about. Business cares about when they can get their new service and how quickly they can start leveraging that, whether it's an internal or external application that they're looking for, and it's incumbent on IT organizations, operations team, as well as developers, to help figure that out. That's really what we're trying to do with Platform‑as‑a‑Service is to drive that process forward.
Gordon:  Thank you very much, Joe.
Joe:  Yeah, thanks a lot, Gordon. It's been great talking to you today.

Tuesday, January 15, 2013

Podcast: Cloud identity management with Ellen Newlands

In this podcast, Red Hat cloud security product manager Ellen Newlands discusses:
  • The changing security perimeter
  • Interoperability and cross-domain trust in heterogeneous hybrid cloud environments
  • The role of open standards in hybrid cloud identity management
  • How to approach identity in the cloud

Listen to MP3 (0:11:40)
Listen to OGG (0:11:40)

Gordon Haff:  Hi, everyone. This is Gordon Haff, cloud evangelist with Red Hat, and I'm sitting here with Ellen Newlands, who's the product manager for our cloud security products. Welcome, Ellen.
Ellen Newlands:  Thank you, Gordon.
Gordon:  Ellen, could you briefly give a little bit of background about yourself and security?
Ellen:  Sure. I have a pretty extensive background in security, and particularly in identity and access management, and more recently, in cloud applications for identity and access.
Gordon:  Great. Well, it's certainly a hot topic in cloud. In fact, I'd like to start off this conversation with a quote from Chris Perretta, the CIO of State Street, in a recent interview he had in Forbes. He says, "I think we'll see the day where our cloud will be accessible to our clients. In fact, it is today. We're building features here where customers actually load their data, and we keep data on their behalf."
I don't think I've read anything that says to me quite as strongly that security is not about the network perimeter any longer. It's about verifying the people who are inside your systems.
Ellen:  I would agree, Gordon. One of the things increasingly that we are seeing with the new technology, cloud in particular, is that from a company point of view, there really is no inside or outside anymore. Knowing who you're doing business with, and who has the right to access what, has become increasingly important when you really don't control your perimeter, and frankly, when there isn't a perimeter anymore.
Gordon:  How do you go about doing this?
Ellen:  One of the things that we have been doing a lot of work on is putting together centralized identity and access management as a feature set within Red Hat Enterprise Linux, to make it a lot easier to manage identities in a centralized context as a foundation for moving into virtualization, and of course, into open hybrid cloud.
Gordon:  You mentioned that "hybrid" word with cloud, and that's obviously a hot topic these days; Gartner is talking about hybrid IT. We're certainly emphasizing open hybrid clouds because it's what our customers are asking us for. How do you handle identity in that kind of distributed world?
Ellen:  I think part of it is how you look at the environment that you're working in. When you say "open hybrid cloud," that includes a company's "private" cloud as well as public clouds, for example, Amazon and others of that sort. What you're seeing is a balance between use of the cloud on‑premise, by the enterprise, and the applications and capabilities that we will put into a public cloud. That's one way of looking at a hybrid cloud. One of the things that we do in identity and access management in Red Hat Enterprise Linux is we make it very easy to set up and manage the identities in a Linux environment.
Gordon:  It's really even a little more complicated than you just said, isn't it though? Because it's not just about having private resources and public resources. It's also about having heterogeneous private and public resources, including, for example, Windows systems in many cases.
Ellen:  I think you make a very good point here, Gordon, which is, many large customers, and even some of the medium sized, will have a heterogeneous environment. Windows is very, very popular, of course, and what we are seeing is that the ability to work well in a heterogeneous environment is very important for identity and access management, and by definition, that really means the ability to interoperate smoothly with Active Directory.
Gordon:  How do you do that?
Ellen:  One of the things that we just shipped in Red Hat Enterprise Linux, the most recent release, is what we call a tech preview of something that we give the name of "Kerberos‑based cross‑domain trust." You might say, "Well, what does that mean? What is that to me?" Both Active Directory and our own identity management in Red Hat Enterprise Linux use the Kerberos standard as developed by MIT. The Kerberos standard has recently been expanded and updated to allow the Kerberos tickets to carry not only the authentication, meaning the identity, but also the attributes. This change in Kerberos allows us to set up a trust between Active Directory and our own identity management.
Gordon:  I think this really points to how important openness is in these hybrid environments, because we're not just talking about open source, but you've just described an open standard that's important to get security authentication across these types of hybrid environments.
Ellen:  Absolutely right, because both Microsoft and Red Hat support the MIT Kerberos standard. We now make it very easy for an end‑user on a Windows client with identity registered in Active Directory to gain access through a trust from Active Directory to identity management in RHEL, to many of the Linux services, within what we call an enterprise single sign‑on. This makes transactions very smooth for the user, and much, much easier to manage for both sets of administrators. It enhances both security and, to some degree, compliance.
Gordon:  One of the challenges in a hybrid environment is that all this stuff is, well, supposed to work together, and I think as any of our listeners, and as you know, that doesn't just auto‑magically happen.
Ellen:  No, but the good news is that with a cross‑domain trust, you need very, very few changes in either one of these, I'm going to call them, domains. The Active Directory and the identity management for Linux need very, very few changes to refer to one another and to pass the credentials from one to the other, which makes it much, much simpler. There's no syncing, no being out of sync, and much less installation and management hassle than you might have had in the past.
One of the advantages too is, in this way, when using identity management in Red Hat Enterprise Linux, since it's designed for native Linux, it allows you to do some very native Linux type things, like sudo rules, etc.
Gordon:  What type of testing do we do that helps ensure this stuff really does work together?
Ellen:  Of course, we do interoperability testing of this functionality to make sure that the Windows client, Windows end‑user, is able to access the Linux services requested through this cross‑domain trust.
Gordon:  If I can maybe take this up a little bit to a higher level, as people are moving to clouds, to open hybrid clouds, what are some of the things they should be thinking, and some of the things they really need to be careful, about as they're setting up their authentication systems?
Ellen:  I think people want to be assured that whoever is accessing services has the right to access those services, and accesses only those services that they have the rightful privileges to access. Fundamentally, that means the right people get access to the right information at the right time, and thinking about how to manage that is very important. Partly for that reason, we do a lot of work on interoperability with the Red Hat products that really function in open hybrid cloud, so back‑ending, for example, CloudForms or OpenShift, and working with the OpenStack community is very important, again, to ensure that the right people have access to the right capabilities at the right time.
Gordon:  What is some of the stuff going on in the identity cloud-related spaces that you think is really interesting right now, that people ought to be keeping their eyes on?
Ellen:  The thing that I think is very interesting now is a lot of the work in the past has been what I call point‑to‑point. It's required a lot of interoperability, setup, it's required a lot of, perhaps, contractual work, etc. Increasingly, you're seeing through some of the newer standards, and I will reference things like OpenID and OAuth, that it becomes easier and easier to interact from one service and one area in the cloud to another with a recognized standard. The standards in the back‑end, the standards for enterprise, identity, are being merged and used as back‑ends for some of the newer cloud areas.
The other thing that I think is very interesting is that the Federal Government has taken a very active role in outlining the kinds of security, and identity and access management, that make for good, secure cloud computing. I think you see this with the OMB, with FedRAMP, and with the new standards from NIST.
Gordon:  Of course, Cloud Security Alliance has also done a lot of work in outlining some of the specific things and security and compliance areas that people need to pay attention to.
Ellen:  Yes. As you know, Red Hat is a member of the Cloud Security Alliance, and I do think for people who are looking to set up a cloud, open hybrid cloud, any kind of cloud, you can get very, very good guidance on what to look at across the whole spectrum of cloud security from the outlines and tips that the Cloud Security Alliance provides.
Gordon:  One thing that, for me, I think is a little bit encouraging is that we seem to be, for the most part, moving beyond "the cloud is secure, the cloud is insecure," sort of discussions, to really much more specific conversations around, say, specific aspects of compliance.
Ellen:  I think that's absolutely true. The cloud is not a monolith, and in some ways, I would submit that the cloud doesn't really exist. There are so many use cases, and one of the things that you learn early in security is to design the security level that you need for the value of the information and where you're placing it. I think that gives people a range of options of what they would put in a public cloud, what they might keep on premise, and how much or how little protection that information may need. Certainly not everything is Fort Knox.
Gordon:  Thank you, Ellen. I've been speaking with Ellen Newlands, the product manager for cloud security at Red Hat.

Friday, January 11, 2013

Links for 01-11-2013

Recovering corrupt Lightroom catalogs

Back in December, something disconcerting happened. My Adobe Lightroom 4 catalog wouldn't load. Disconcerting because I had something on the order of 50,000 photos in that catalog and recreating the whole shooting match would be a frightful chore. Indeed, I wasn't even quite sure how frightful that chore would be. 

However, I keep good backups in various forms and I was able to find a recent backup of my catalog that was happy enough to load. (The catalog does not include the photos themselves but it does include much of the information related to their organization and other metadata.)

But the problems didn't go away. And, today, even after a migration from a Windows system to a new Mac Mini, they came back in spades. I simply couldn't make a backup of the catalog from within Lightroom, which also refused to optimize it. I could pull in a somewhat earlier backup of the catalog through the Mac's Time Machine backup program, but there was obviously something deeply wrong here. And Lightroom's repair catalog function refused to do anything useful.

As it turns out though, hat tip to Stephen Shankland for the info, Lightroom uses an SQLite database, and there are various tools one can use to fix corrupted SQLite databases.

The first thing I did was download sqlite3 and sqlite3_analyzer.

sqlite3_analyzer confirmed there was indeed some sort of problem. "database image is malformed," it told me. Some searching never did tell me what that means exactly but, in any case, it confirmed there was a problem with the database.
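Before reaching for the heavier tools, SQLite's built-in integrity check is a quick way to confirm (or rule out) corruption. A minimal sketch, assuming the sqlite3 binary is on your PATH; run it against a copy of your catalog:

```shell
# Ask SQLite to verify the database's internal consistency.
# A healthy catalog prints "ok"; a corrupt one lists the problems
# (e.g. "database disk image is malformed").
sqlite3 lightroom_catalog.lrcat "PRAGMA integrity_check;"
```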

The real find came from Gerhard Strasse's blog. Read the blog post, but I just want to add a few comments.

Basically the steps are pretty simple.

First you dump the existing catalog into a text file as a bunch of SQL commands:

echo .dump | ./sqlite3 ~/lightroom_catalog.lrcat > ~/lightroom_catalog.sql

then, in theory, you can just suck that text into a new database file with:

./sqlite3 -init ~/lightroom_catalog.sql ~/lightroom_catalog_restore.lrcat

The blog notes that you may get an error or two (i.e., the duplicate keys or whatever else was causing the database issue) but it should work. It didn't in my case. (lightroom_catalog_restore.lrcat was a zero-byte file on the first try.) The secret for me was at the end of the comments.

Reader Chairat Juengmongkolwong noted that if, after the first step, you go into the text file (lightroom_catalog.sql in this case) and replace the last line that says:

ROLLBACK; -- due to errors

with the line:

COMMIT;
And THEN do the 

./sqlite3 -init ~/lightroom_catalog.sql ~/lightroom_catalog_restore.lrcat

it should work. Which it did for me. I never did see any errors so I'm not sure what the problem was. Hopefully, it is fixed now. 

Obviously, if you've never touched a Unix/Linux command line this is probably a bit intimidating and you'll probably need to enlist some help. Make copies and work on backups! This process can doubtless be done on Windows as well, although the details will be a bit different. (The Terminal application on a Mac provides a Unix command line.) I'd also note that if you have a very large catalog—mine is about 500MB—a lot of text editors won't be able to handle it. I ended up downloading and installing Vim, which handled what a couple of others couldn't.
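If hand-editing a 500MB dump sounds unappealing, the same ROLLBACK-to-COMMIT swap can be done with sed and no text editor at all. A sketch on a stand-in dump file; point the same sed command at your real lightroom_catalog.sql:

```shell
# Stand-in for the dump file produced by the .dump step above.
printf 'BEGIN TRANSACTION;\nCREATE TABLE t(a);\nINSERT INTO t VALUES(1);\nROLLBACK; -- due to errors\n' > dump.sql

# Rewrite the final ROLLBACK line to COMMIT; so the rebuilt database
# keeps the rows imported before the error. -i.bak edits in place and
# keeps the original as dump.sql.bak (works with GNU and BSD/macOS sed).
sed -i.bak 's/^ROLLBACK;.*$/COMMIT;/' dump.sql

tail -n 1 dump.sql   # now reads: COMMIT;
```

After that, the ./sqlite3 -init step proceeds exactly as described above.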

Another possible option, BTW, is to fire up a new catalog and import your existing (corrupt) catalog, although this will cause you to lose any publishing services that you have set up, and possibly other things as well. However, when I did this, it only imported a quarter or so of my photos. (Presumably it quit at the spot of the error.)

My configuration was a Mac Mini running Mountain Lion and Lightroom 4.3.

The one other thing I might add is that, given this is a fairly straightforward procedure, and one that is fairly standard for this particular SQLite database error as far as I can tell (whether for Lightroom or otherwise), it's unclear why it isn't embedded within Lightroom's own corrupt database recovery process.

Thursday, January 10, 2013

The possibilities of shorter books

In 2009, Philip Greenspun observed the following:

The pre-1990 commercial publishing world supported two lengths of manuscript:

1. the five-page magazine article, serving as filler among the ads

2. the book, with a minimum of 200 pages

Suppose that an idea merited 20 pages, no more and no less? A handful of long-copy magazines, such as the old New Yorker would print 20-page essays, but an author who wished his or her work to be distributed would generally be forced to cut it down to a meaningless 5-page magazine piece or add 180 pages of filler until it reached the minimum size to fit into the book distribution system.

What got me thinking about this again was the publication of my former colleague Stephen O'Grady's book, The New Kingmakers. (Grab a copy now. It's free. Really. Right now. I can wait. Back now? OK.)

It's relatively short at 50 pages but, as Stephen wrote me on Twitter, "between Race Against the Machine & the Kindle Singles model, i didn't see the point in stretching to meet artificial expectations." (Race Against the Machine is another recent non-fiction book by MIT's Andrew McAfee and Erik Brynjolfsson. It weighs in at 98 pages.)

For their part, Kindle Singles and other "e-singles" are apparently increasing in popularity. As noted in Laura Hazard Owen's post "Why 2012 was the year of the e-single":

In February, I reported that the company had sold two million Kindle Singles; as of September, that number was up to 3.5 million, and Amazon just expanded the program to the U.K., where it will include new entries by bestselling British authors as well as most of the American Kindle Singles.

And The New York Times generated lots of buzz with its beautifully executed "Snow Fall: The Avalanche at Tunnel Creek" feature.

That said, I expect the business models, especially when publishers are involved, will need some shaking out. Owen also noted that "With most Kindle Singles priced at $1.99, that’s only $7 million or so — and Amazon only takes 30 percent of it, making the revenue basically a rounding error." For conventionally published printed works, I suspect the economics will be even more challenging in most cases. Nonetheless, as e-books become ever more mainstream and reader/buyer expectations become less anchored by traditional publishing forms, I expect written works to start gravitating towards their natural lengths. (Although, realistically, as anyone who has been exposed to the endless over-length series in genre fiction will tell you, there will probably always be certain incentives to use more words rather than fewer.) 

This discussion is especially pertinent to me because I'm in the process of wrapping up a book on cloud computing that I'll be publishing through Amazon's CreateSpace. I fussed around a fair bit regarding the "proper" minimum length for the book. At the end of the day, the book is going to hit a fairly conventional 60,000 word, 250 page or so length but it probably didn't need to. I don't really see anything in the book as filler, but I might have done some things differently if I weren't determined to hit a 50,000 word or so minimum target.

Wednesday, January 09, 2013

Links for 01-09-2013

Tuesday, January 08, 2013

Deploying and managing OpenShift Enterprise PaaS

We did a lot in the cloud computing space last year at Red Hat. We shipped our open hybrid cloud management software, CloudForms. We further ramped up our OpenStack activity with a technology preview. We acquired ManageIQ. We developed integrated cloud solutions based on the full Red Hat product portfolio. 

But that's not all. One of the most exciting announcements of the past year was an on-premise version of Red Hat OpenShift Platform-as-a-Service. We'd been racking up new users and new applications—and expanding the functionality—on the hosted offering. The individual developers and IT departments who tried it out really liked the way they could develop using their choice of languages and frameworks and customize the development environment to meet their needs. Simply put, OpenShift Online (as it's now called) takes a lot of friction out of software development without limiting flexibility.

However, much as some customers appreciated the functionality of the online service, they wouldn't be able to make widespread use of it so long as it was only available in hosted form. To address these requirements, last November Red Hat introduced OpenShift Enterprise, an on-premise version of OpenShift PaaS. (For more background on enterprise PaaS, you can check out this whitepaper.)

Unlike with a hosted service, an on-premise PaaS has to be deployed and managed. To assist with this process, Scott Collier and Steve Reichard on Red Hat's Systems Engineering team have put together a new reference architecture: "Deploying and Managing a Private PaaS with OpenShift Enterprise." The full reference architecture weighs in at almost 150 pages and includes lots of screen shots, configuration file entries, and command line text.* 


The reference architecture shows how to deploy OpenShift Enterprise in a distributed way that separates domain name services, ActiveMQ, and MongoDB from the OpenShift Enterprise broker host. It also shows how to deploy both PHP and Java applications, enable applications with Jenkins continuous integration services, and use JBoss Developer Studio within an OpenShift Enterprise environment.

The broker is the single point of contact for all application management activities. It manages user logins, DNS, application state, and application orchestration. The broker can be accessed programmatically through a RESTful API or, alternatively, through a Web console, command-line tools, or JBoss Developer Studio. ActiveMQ provides the messaging infrastructure for the broker, and MongoDB provides its persistent data store. In this reference architecture, the broker components were set up redundantly to provide high availability for OpenShift Enterprise's management system.
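As a rough sketch of what programmatic access looks like, the snippet below builds an authenticated request against a broker's REST API entry point. The hostname and credentials are hypothetical placeholders, and exact resource paths can vary by OpenShift Enterprise release, so treat this as illustrative rather than as the definitive client code.

```python
import base64
import urllib.request

# Hypothetical broker endpoint and credentials -- substitute your own.
BROKER_URL = "https://broker.example.com/broker/rest/api"


def broker_request(url, user, password):
    """Build an HTTP Basic-authenticated request for the broker REST API."""
    req = urllib.request.Request(url)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Accept", "application/json")
    return req


req = broker_request(BROKER_URL, "demo", "secret")
# urllib.request.urlopen(req) would then return the API's entry-point
# resource as JSON, listing the available links and operations.
```

The same Authorization header works for the other broker resources (domains, applications, and so on) documented in the REST API guide for your release.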

The reference architecture shows how to set up a load balancing cluster of OpenShift Enterprise brokers. The Load Balancing Add-on for Red Hat Enterprise Linux provides support for TCP load balancing independent of applications. It is composed of two major components: the Linux Virtual Server (LVS) and the Piranha Configuration Tool, a management tool with a GUI.
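For illustration, a minimal Piranha configuration (conventionally `/etc/sysconfig/ha/lvs.cf`) for a pair of load-balanced brokers might look roughly like the fragment below. All addresses, interface names, and weights are invented placeholders; in practice this file is normally generated through the Piranha Configuration Tool's GUI rather than written by hand.

```
serial_no = 1
primary = 10.0.0.10          # active LVS director
backup = 10.0.0.11           # standby director
backup_active = 1
service = lvs
network = direct             # direct routing to the real servers
virtual broker_https {
    active = 1
    address = 10.0.0.100 eth0:1   # virtual IP clients connect to
    port = 443
    persistent = 300
    scheduler = wlc               # weighted least-connections
    protocol = tcp
    server broker1 {
        address = 10.0.0.21
        active = 1
        weight = 1
    }
    server broker2 {
        address = 10.0.0.22
        active = 1
        weight = 1
    }
}
```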

OpenShift applications run on nodes. Application multi-tenancy on each node is provided through SELinux and cgroup restrictions that isolate each application's resources and data. Nodes can be added as needed to provide the computing resources required by the applications running on an OpenShift Enterprise installation. The reference architecture demonstrates deploying the node software using the Red Hat Network (RHN) as well as Red Hat's content delivery network (CDN). This includes installing the Marionette Collective (MCollective) server orchestration framework packages on the node.
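Node capacity and per-gear isolation are governed by settings along the lines of those in `/etc/openshift/resource_limits.conf`. The fragment below is an illustrative sketch with made-up values rather than a recommended configuration; key names and defaults vary by release, so check the file shipped with your installation.

```
# /etc/openshift/resource_limits.conf (illustrative values only)
node_profile=small                 # gear size profile this node serves
max_apps=80                        # total gears this node may host
max_active_apps=80                 # gears that may be active at once
quota_files=40000                  # per-gear file-count quota
quota_blocks=1048576               # per-gear disk quota (1 KB blocks, ~1 GB)
memory_limit_in_bytes=536870912    # cgroup memory cap per gear (512 MB)
```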

Gears are containers with a set of resources that allow users to run applications within a node. Districts define a set of node hosts, along with the resource definitions those hosts must share, in order to allow transparent migration of gears among them. Districts aren't required, but they're simple to use, and any production installation can benefit from them. A district allows a gear to keep the same ID when moved between any node hosts within that district. Thus users' applications are not disrupted when they are migrated between nodes, even if they contain hard-coded values.
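On the broker host, creating a district and populating it with node hosts comes down to a few administrative commands along these lines. The district name, gear profile, and hostnames are placeholders, and option spellings can differ between releases, so verify against `oo-admin-ctl-district --help` on your own installation.

```
# Create a district for nodes serving the "small" gear profile
oo-admin-ctl-district -c create -n small_district -p small

# Add two node hosts to the district
oo-admin-ctl-district -c add-node -n small_district -i node1.example.com
oo-admin-ctl-district -c add-node -n small_district -i node2.example.com

# Later, a gear can be moved between nodes in the district;
# its ID is preserved, so the application is not disrupted.
oo-admin-move --gear_uuid <uuid> --target_server_identity node2.example.com
```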

There's a wealth of additional detail in the reference architecture but the above should give a high-level taste. 

Increasingly, we see organizations looking to hybrid IT and hybrid cloud as detailed in this Gartner report. The addition of OpenShift Enterprise to our OpenShift Online service brings hybrid to PaaS. You don't need to choose. You can have both.

* The full configuration files are only available to Red Hat customers as part of the value-add of a Red Hat subscription.

Monday, January 07, 2013

State Street: Cloud isn't just a technology exercise

Over at Forbes, Joe McKendrick did a nice interview with Chris Perretta, the CIO of State Street, about the cloud computing initiatives going on there.

There's not a lot of detail about the technical aspects. What's there though are some nice insights into how State Street thinks about cloud computing and how they're using it to transform their business. A couple of excerpts:

We use cloud as a shorthand; it really is a complete view of the application delivery mechanism, the application frameworks that we use, and our ability to share real core components of our technology environment.  A large chunk of what we do we write, and are realizing a lot of savings, efficiencies and capabilities in sharing new development frameworks across our enterprise, built into our cloud infrastructure. We can really move our development activities a lot faster than we have in the past.  We’re also turbo charging our efforts to provide much more data insights for our customers, opposed to just top-level processing than we have done for them in the past.

A couple of things jump out for me. The first is that cloud isn't about plopping down some piece of technology. Rather, it's really about transforming enterprise IT. Common sense, perhaps, but I still find there's often too much focus on point products in cloud discussions. The second is the discussion around accelerating development. Everything I see suggests that, far from turning software into some kind of standardized utility, cloud computing seems to be ushering in what Gartner's Eric Knipp has called "a golden age of enterprise application development." (I discussed this trend in more detail last December.)

I think we’ll see the day where our cloud will be accessible to our clients — in fact, it is today. We’re building features here where customers actually load their data, and we keep data on their behalf. They can manipulate that data, they can join it, they can look at it in different ways, they can in essence write applications.

I'm not sure you could read a much more emphatic example of why you can't gain security just through defending the network perimeter. There is no inside and outside. Or, really, everyone's inside. Sharing happens pervasively. That's not to say there's no difference between a private cloud and a public cloud; indeed, State Street made a deliberate decision to build its own cloud rather than using public cloud resources. But even if you've got a private cloud, you need to focus on things like multi-tenant security—which Red Hat's Matt Hicks has spoken and written about extensively.

I suggest reading the whole interview. It's not long, but it's good stuff. State Street was a 2012 Red Hat Innovation Award winner for "OUTSTANDING OPEN SOURCE ARCHITECTURE: Recognition of a combination of Red Hat's platform, middleware, cloud and/or storage solutions to create innovative architectures based on Red Hat solutions."