Links:
Listen to MP3 (0:15:35)
Listen to OGG (0:15:35)
[Transcript]
A few factoids about the success, or lack thereof, of cloud-related projects have been making the rounds of late. This is an edited version of my response to a recent query from a journalist about this topic.
A lot of large IT projects do fail. For example, a 2012 McKinsey study found that "on average, large IT projects run 45 percent over budget and 7 percent over time, while delivering 56 percent less value than predicted." And 17 percent went so badly that the very existence of the company was threatened. What's a typical number? That's hard to say because it depends on so many factors--perhaps most notably the size and length of the project. Big ERP projects are the poster children for IT project failure, with failure rates of at least 25 percent commonly cited.
Here's my take on two specific numbers that I’ve seen cited recently. I’m inclined to interpret them rather differently from each other although they do have aspects in common.
The first number comes from Capgemini Consulting: "Only 27% of the executives we surveyed described big data initiatives as ‘successful.’" Although the details are quite different--not least because of the dominance of open source technologies in current big data storage (including from Red Hat) and analysis solutions--it feels as if we're in somewhat the same place as we were amidst all the data warehousing hype of the mid- to late-90s. There's this feeling that, with all the data out there, we must be able to do *something* with it even if we don't know the right questions to ask or the right models to apply.
So I chalk up a lot of the failures happening in the big data project space to projects not having a clear goal and a clear path to that goal. If you look at Gartner studies, for example, the #1 and #2 big data challenges are "Determining how to get value from big data" and "defining our strategy." Many organizations are undertaking big data projects mostly because it's something they think they ought to be doing even if they don't know how or why. Of course they're not going to succeed!
What should they do? Well, don't do that. Look at the success stories that do exist and ask if you could do something similar. But be careful. Many of those stories are something between Photoshopped reality and myth.
Technologies are shifting too--for example, the emergence of in-memory processing with Apache Spark. Software-defined storage (both file and object) is also maturing rapidly and coming into its own. So it is challenging to pick the right technologies and apply them to a specific problem. We've seen far too much "The answer is Hadoop. What's the problem?" But the bigger problem is not having sensible and actionable objectives.
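To make the in-memory point a bit more concrete, here's a minimal PySpark sketch. The HDFS path and filter terms are purely illustrative: the idea is simply that once a dataset is cached, later computations reuse the in-memory copy instead of going back to disk.

```python
# Minimal PySpark sketch of in-memory processing. The file path and the
# filter terms below are illustrative only.
from pyspark import SparkContext

sc = SparkContext(appName="in-memory-example")

# Read a (hypothetical) log file and keep it in memory after the first pass.
logs = sc.textFile("hdfs:///data/app.log").cache()

# Both actions below reuse the cached, in-memory copy rather than
# re-reading the file from disk -- the core of Spark's appeal for
# iterative and interactive analysis.
errors = logs.filter(lambda line: "ERROR" in line).count()
warnings = logs.filter(lambda line: "WARN" in line).count()

print(errors, warnings)
sc.stop()
```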
On the cloud side, Tom Bittman's informal poll and some of his associated Gartner research about private clouds highlight quite a few reasons (perhaps most of all organizational ones) for the problems that cloud projects encounter. But it's worth noting that his quote was "95% of the 140 respondents (who had private clouds in place) said something was wrong with their private cloud." It's the rare IT project (or indeed any project of consequence) that doesn't have at least some problems. So I'm less inclined to focus on the 95 percent than on the steps that can be taken to reduce the number of problems: set objectives, get the right organization in place, engage with the right partners, take an iterative approach, and don't try to do everything at once.
At Red Hat, we've been doing a lot of work with clients in both the Infrastructure-as-a-Service (Red Hat Enterprise Linux OpenStack Platform) and Platform-as-a-Service (OpenShift by Red Hat) spaces. Part of our involvement is, of course, delivering supportable enterprise product subscriptions. But it's also working with customers to implement a proof of concept, often with a small consulting team. It doesn't need to be a mega-project (and it's usually better if it isn't), but some initial guidance from consultants who have done this before can go a long way toward getting projects headed in the right direction and having fewer problems as a result. We’ve also partnered with other vendors, including Dell (Dell Red Hat Cloud Solution, Powered by Red Hat Enterprise Linux OpenStack Platform) and Cisco, to create integrated infrastructure solutions.
Pretty much no project of any size is going to go without a hitch. But understanding your objectives going in and working with the right partners can make a big difference.
Over at VentureBeat, Jerry Chen of Greylock Partners writes:
We are entering the age of developer-defined infrastructure (DDI). Historically, developers had limited say in many application technologies. During the 1990s, we effectively lived in a bilateral world of Microsoft .NET vs Java, and we pretty much defaulted to using Oracle as a database. In the past several years, we have seen a renaissance in developer technologies and application infrastructure from a proliferation of languages and frameworks (Go, Scala, Python, Swift) as well as data infrastructure (Hadoop, Mongo, Kafka, etc.). With the power of open source, developers can now choose the language, runtime, and database that make sense. However, developers are not only making application infrastructure decisions. They are also making underlying cloud infrastructure decisions. They are determining not only where will their applications run (private or public clouds) but how storage, networking, compute, and security should be managed. This is the age of DDI, and the IT landscape will never look the same again.
In part, this reflects developers as The New Kingmakers, as my former colleague, RedMonk’s Stephen O’Grady, has eloquently written about. Like any meme, the ascendancy of developers as IT decision makers can be overstated. Developers flock to Apple’s app store for the same reason that Willie Sutton robbed banks: it’s where the money is, not because it’s a wonderful developer-focused experience. Nor are we living in a NoOps world, a term that caused a bit of a furore a couple of years back.
That said, many of the most interesting happenings in enterprise software today have a distinct developer angle whether or not they’re exclusively built around developer concerns. Containers and their associated packaging, orchestration systems, and containerized operating systems (like Red Hat Enterprise Linux Atomic Host/Project Atomic) certainly. An expanding landscape of programming languages. (To quote Stephen O’Grady again: “an environment thoroughly driven by developers; rather than seeing a heavy concentration around one or two languages as has been an aspiration in the past, we’re seeing a heavy distribution amongst a larger number of top tier languages followed by a long tail of more specialized usage.”) And even much of the action in data is at least as much about the applications and the analytics as about the infrastructure.
However, to Jerry Chen’s basic point, it’s also about separating the concerns of admins and developers so that each can work more effectively. As I’ve written about previously, this is one of the reasons why a Platform-as-a-Service (PaaS), such as OpenShift by Red Hat, is such a useful abstraction. It’s a nicely placed layer from an organizational perspective because it sits right at the historical division between operations roles (including those who procure platforms) and application development roles, thereby allowing both to operate relatively autonomously. And in so doing, it helps to enable DevOps by providing the means for operations to set up the platform and environment for developers, while the PaaS provides self-service for the developers and takes care of many ongoing ops tasks such as scaling applications.[1]
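To illustrate that division of labor, here's a deliberately simplified, hypothetical sketch; none of these classes correspond to actual OpenShift APIs. Operations stands up the platform and sets its guardrails once, while developers create and scale applications on their own within those limits.

```python
# Hypothetical, simplified model of the ops/dev split a PaaS creates.
# These classes do not correspond to real OpenShift APIs; the point is
# only to show who touches which layer.

class Platform:
    """Configured once by operations: allowed runtimes and per-app limits."""

    def __init__(self, runtimes, max_instances_per_app):
        self.runtimes = runtimes
        self.max_instances = max_instances_per_app
        self.apps = {}

    # Everything below is developer-facing, self-service functionality.
    def create_app(self, name, runtime):
        if runtime not in self.runtimes:
            raise ValueError("runtime %r not offered by operations" % runtime)
        self.apps[name] = {"runtime": runtime, "instances": 1}

    def scale(self, name, instances):
        # The platform, not the developer, enforces the limits ops set.
        self.apps[name]["instances"] = min(instances, self.max_instances)


# Operations stands up the platform and sets policy once...
paas = Platform(runtimes=["python", "java", "node"], max_instances_per_app=10)

# ...and developers provision and scale applications without filing tickets.
paas.create_app("storefront", "python")
paas.scale("storefront", 4)
print(paas.apps)
```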
[1] A PaaS like OpenShift also enables DevOps in other ways, such as providing tools for continuous integration and a rich set of languages and frameworks, but I wanted to focus here on the abstraction.