Thursday, February 28, 2013

Links for 02-28-2013

Why OpenShift has polyglot baked in

Another day, another language. Yesterday, another PaaS provider announced they were adding a language to their PaaS—in this case, supplementing their initial .NET PaaS with Java. Such moves have become something of a pattern. Many of the initial hosted PaaS offerings were unabashedly monolingual.

Engine Yard began with a Ruby on Rails focus, but has since added PHP and node.js. Google App Engine initially supported a variant of Python but now does Java and Go too. (Go is a Google-developed language that aims to provide the efficiency of a statically typed compiled language with the ease of programming of a dynamic language.) AppFog recently discontinued their PHP-only PHPFog platform. Even Microsoft's .NET-centric Azure PaaS has added Java.

OpenShift architecture

I can't say I'm surprised. Whenever Red Hat has conducted surveys about intended language use in the cloud—whether private, public, or hybrid—we've always seen a great deal of diversity in the answers. (As well as considerable correlation with the languages those taking the survey are currently using.) Given those results, it seems unlikely that most enterprise development shops would be interested in adopting a service that limited them to a narrow set of languages or frameworks.

This isn't to say that enterprise software development is completely ungoverned. (Though sometimes it seems that way given the breadth of tools in use.) In fact, as I discussed previously, one of the big attractions of PaaS for enterprise architects is that it provides opportunities to standardize development workflows, thereby making both the initial development and the subsequent lifecycle management of applications much more efficient. But, standardization notwithstanding, enterprise applications and infrastructure are heterogeneous. And that means a polyglot development environment is a must.

That's the approach Red Hat has taken with OpenShift from the beginning—whether we're talking about the OpenShift Online hosted service or the OpenShift Enterprise on-premise version. (The other thing we hear consistently is that many large organizations adopting PaaS want to run it on their own servers; application development is just too central a task for them to be comfortable running it on a hosted service.)

OpenShift is fundamentally architected around choice. "Technologies" (languages, databases, etc.) are delivered as cartridges—pluggable capabilities you can add at any time. When you create an application, you start with a web platform cartridge, to which you can add capabilities as you choose. Each cartridge runs on one or more "gears" (basically, units of OpenShift capacity), depending on how far your application has scaled.

Major open source Web language choices are ready to grab and go: Java EE 6, PHP, Ruby, Perl, Python, and node.js. But you can also build your own cartridges. You can even connect cartridges together. For example, you could have a PHP cartridge in one gear and a MySQL cartridge in another gear. (We're in the process of rolling out a new cartridge design to make building cartridges easier.) 
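To make the architecture described above a bit more concrete, here's a toy sketch in Python. It's purely illustrative: the class and method names are invented for this post, not OpenShift's actual API, but it captures the relationship between applications, cartridges, and gears.

```python
# Toy sketch of the cartridge/gear model described above.
# Illustrative only; these names are not OpenShift's actual API.

class Gear:
    """A unit of platform capacity that hosts one cartridge."""
    def __init__(self, cartridge):
        self.cartridge = cartridge

class Application:
    """An app starts with a web cartridge; more capabilities plug in later."""
    def __init__(self, name, web_cartridge):
        self.name = name
        self.gears = [Gear(web_cartridge)]

    def add_cartridge(self, cartridge):
        # Each added capability (a database, say) runs on its own gear.
        self.gears.append(Gear(cartridge))

    def scale(self, cartridge, count):
        # Scaling up means running the same cartridge on more gears.
        for _ in range(count):
            self.gears.append(Gear(cartridge))

app = Application("myblog", "php")
app.add_cartridge("mysql")   # PHP in one gear, MySQL in another
app.scale("php", 2)          # scale the web tier across more gears
print(len(app.gears))
```

The design point the sketch tries to show: capabilities and capacity are decoupled. Adding a capability means plugging in a cartridge; scaling means running a cartridge on more gears.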

This ability to extend OpenShift is an important architectural feature that dovetails right into the open source development model and leverages the power of the community. And it's not an afterthought. It's in OpenShift's DNA.

Wednesday, February 27, 2013

PaaS isn't just for developers

Most of the attention focused on Platform-as-a-Service (PaaS) is on its impact on developers. That's understandable. After all, developers are the ones "consuming" PaaS in order to create applications. In fact, as I've written about previously, Eric Knipp of Gartner goes so far as to call today "a golden age of enterprise application development"—in no small part because of PaaS. Developer productivity is incredibly important, given that businesses large and small depend on information technology more than ever. And, while much of that IT can and should come from pre-packaged software and services, plenty needs to be customized and adapted for a given business, industry, and customer set.

As my Red Hat colleague Joe Fernandes discussed in a recent podcast:

For developers, Platform‑as‑a‑Service is all about bringing greater agility and giving them a greater degree of self‑service, really removing IT as the bottleneck to getting things done. In public PaaS services like OpenShift, developers can come and instantly begin deploying applications. They can choose from a variety of languages and frameworks and other services like databases and so forth. And they don't need to wait for systems to be provisioned and software to be configured. The platform is all there waiting for them, so they can be productive much more quickly. And really, what that means is that they can focus on what matters most to them, which is really their application code. They can iterate on their designs and really see the applications up and running without having to worry about how to manage what's running underneath.

But, as Joe also discussed, PaaS isn't just for developers. As we start to inject Platform-as-a-Service into enterprise development environments—often in the form of an on-premise product such as OpenShift Enterprise—it helps system administrators and system architects too.

Consider first the IT operations teams, the "admins" in the vernacular. They're tasked with supporting developers. They're the ones who have historically had to deal with the help desk tickets filed to request new infrastructure for a project. They're also the ones who get bombarded with increasingly irate questions about why the new server hasn't been installed and provisioned yet. Of course, virtualization and virtualization management have helped to some degree, but they've generally reduced the internal friction of the process rather than fundamentally changed it.

A PaaS, on the other hand, allows admins to focus up-front on basic policies (such as whether to use a public hosted service or to deploy in-house) and to work with developers on defining which standardized environments they need. At that point, self-service and automation (under policy) can largely take over. The "machinery" can scale the apps, deliver new development instances, isolate workloads, and spin down unused resources—all without much ongoing involvement by the admins.

Of course, if it's an on-premise environment, they'll still need to manage the underlying infrastructure, but that's the price of having more direct control and visibility than with a public shared service. An IT operations team has to manage this infrastructure efficiently and securely. PaaS can help with that too. For example, OpenShift goes beyond server virtualization by introducing the concept of multi‑tenancy within a virtual machine using a combination of performance and security features built into Red Hat Enterprise Linux.

As for enterprise architects, they're trying to marry the IT infrastructure, IT operations, and application development methodologies to the needs of the business. So they have to understand where the business is going and how IT is going to architect its infrastructure, applications, and processes to address those needs. All this in the face of tremendous growth in demand from the business for new services, new back-end applications, new mobile applications, new web services, and more. It falls on enterprise architects to help figure all this out.

One way PaaS helps is that it lets them standardize developer workflows, that is, the process IT goes through every time a developer starts on a new project: getting developers provisioned with the infrastructure and software they need so that they can start developing, testing (including performance testing), or even deploying applications all the way through to production. The result is not only faster application development but also less fragile and error-prone application architectures—attributes that are especially important as we move toward more modular and loosely coupled software.

As Joe put it to me:

You're never going to eliminate the role of the IT operations team in an enterprise context. What you need to do is figure out how the operations team can work more effectively with the development side of the house to meet the needs of the business. To me, it's not dev or ops. It's really both. The developers aren't going to take over the job that the IT operations team does any more than the IT operations team is going to be able to build and deploy their own applications and so forth.

The question is, how do both sides work more effectively together? How do they reduce friction and really help accelerate time to market? Because, ultimately, that's all the business cares about. Business cares about when they can get their new service and how quickly they can start leveraging that, whether it's an internal or external application that they're looking for, and it's incumbent on IT organizations, operations team, as well as developers, to help figure that out. That's really what we're trying to do with Platform‑as‑a‑Service: drive that process forward.

Tuesday, February 26, 2013

Links for 02-26-2013

Balancing the desires of users with the needs of enterprise IT

Forrester's James Staten has a typically smart blog post up called "Why your enterprise private cloud is failing." It's based on "The Rise Of The New Cloud Admin," a report that James co-wrote with Lauren Nelson. James writes:

You're asking the wrong people to build the solution. You aren't giving them clear enough direction on what they should build. You aren't helping them understand how this new service should operate or how it will affect their career and value to the organization. And more often than not you are building the private cloud without engaging the buyers who will consume this cloud.

And your approach is perfectly logical. For many of us in IT, we see a private cloud as an extension of our investments in virtualization. It's simply virtualization with some standardization, automation, a portal and an image library isn't it? Yep. And a Porsche is just a Volkswagen with better engine, tires, suspension and seats. That's the fallacy in this thinking.

To get private cloud right you have to step away from the guts of the solution and start with the value proposition. From the point of view of the consumers of this service - your internal developers and business users.

The post and report do a great job of articulating why a private cloud isn't just an extension of virtualization. It may leverage and build on virtualization, but the thinking and approach have more differences than may be readily apparent if you're focused only on the technology.

I think about it this way. Even though it's about virtual rather than physical machines, the fundamental virtualization mindset is still about servers. With cloud, that mindset should shift to delivering IT services to users. That's a big difference. (This shift is discussed in more detail in both Visible Ops Private Cloud: From Virtualization to Private Cloud in 4 Practical Steps by Andi Mann, Kurt Milne, and Jeanne Moraine and in my new book Computing Next: How the cloud opens the future.)

To make things a bit more concrete, here's another way of looking at the expectations for a private cloud.

[Slide: expectations for a private cloud]

Public clouds were initially largely a grassroots phenomenon. Users voted with their credit cards for IT resources delivered in minutes, not months. They voted for freedom from restrictions in the type of software they could run. They voted for easy-to-use interfaces and fewer roadblocks to developing new applications.

When an enterprise builds a private or a hybrid cloud, it needs to preserve the goodness that drove its users, often developers, to the cloud in the first place. It may well need to balance these desires with legitimate governance, consistency, compliance, and security requirements. But it has to do so without effectively throwing out the cloud operating model and going back to business as usual.

As RedMonk's Stephen O'Grady told me in a recent podcast:

If you're looking to rein in or at least gain some visibility into usage, you basically have two choices. You can try to say, "No, you can't do this and you can't use these tools." As I've said, that's an effort, in my opinion, doomed to failure in most cases. The alternative is to say, "I understand that there are reasons and very legitimate business reasons that you're doing what you're doing. I'm going to try to go along with that program as much as I can. In return for that, I want visibility into what's going on." In other words, trying to meet developers halfway and having them do the same.

This is where open, hybrid cloud management comes in. I'm going to discuss the components of this management, as implemented in Red Hat CloudForms and ManageIQ, in greater detail in an upcoming post. But, for our purposes here, open hybrid cloud management is fundamentally about balancing what users/developers want and what enterprises need. It's about offering the user experience of the public cloud within the policy framework of enterprise IT.

Sunday, February 24, 2013

Gordon Haff author interview: Computing Next

I talk about the general framework for the book, how I went about writing it, and some of the more interesting topics in cloud computing today. You can read more about it on this page.

Decisions I made when publishing my new book

I spent most of yesterday making a final (hopefully) set of tweaks to my new cloud computing book—or at least final until such time as I decide substantial revisions (i.e., a new edition) are called for. As I'm sure everyone has experienced in their own way, getting to 100 percent (or as close to 100 percent as reality allows) always takes far more time than it seems as if it should. Especially when you are getting profoundly sick of the whole enterprise and just want it over with. I'm going to give everything a few days to settle but here's hoping that, once everything percolates through Amazon's system (they seem to use something of an "eventually consistent" model for their publishing platform as for other things), I'll be able to call the production end of things a wrap and feel confident promoting to a broader audience.

Given that, I thought I'd take the opportunity to do something of a postmortem, both in the interest of sharing possibly useful information and to document a few things for myself. This is, by no means, intended to be a definitive guide to publishing a book. Rather, it highlights things I learned in the course of this experiment.

As many of my readers know, by day I am Red Hat's cloud evangelist. Thus, this book was only a pseudo-personal project. I wrote much of it on my own time, but it leveraged a fair amount of material I had previously written for one outlet or another (blog posts and the like), as well as material others wrote and which they gave me permission to use. My goal was to pull together my thinking on cloud computing and related trends within this context. It wasn't and isn't really focused on profit.

Up-front decisions:

Publisher. I decided to publish through the Amazon CreateSpace publishing platform. To some degree, this decision came about from following the path of least resistance. The timeline for this project expanded and contracted based on a variety of external factors as I rethought various topics. Some of my thinking about the best way to approach certain aspects of the book also morphed. Certainly a publisher could potentially have helped me through some of these questions. At the same time, they'd also likely impose constraints based on marketability, which, as I've indicated, wasn't a top priority for me. At the end of the day, I felt comfortable tackling the project on my own and it just seemed easiest that way. (At one point, we did consider making this an "official" Red Hat project, but that didn't come to fruition for a variety of reasons.)

Length. My initial thinking was that my book should be a "normal" length, which my research suggested was somewhere around 60,000 words. I've come to think that, while there are reasons to have the heft of a typical book, it's not really necessary—at least in the context in which I was working. I recently wrote a post about how short books are more practical today than in the past. That said, although I trimmed some material that I came to see as filler, I added other material. And I rather liked the "guest posts" others let me use even if they were partly there initially to pad things out.

Organization. One of my colleagues, Margaret Rimmler, suggested the idea of short chapters based on the look of Jeremy Gutsche's Exploiting Chaos. That basic concept fit well with leveraging existing relatively short (1,000 word or so) blog posts and the like. The chapters in my Computing Next are longer and the tone is considerably different, but I did ultimately stick with the idea of chapters that are relatively standalone.

Format. Where I broke considerably from Exploiting Chaos' look and feel was in my approach to graphics. I initially was headed down the road of having a graphically rich book with lots of full-bleed photographs and the like. However, I came to rethink this approach. For one thing, I realized it was going to create quite a bit of incremental work and cost; I'd have to use a full desktop publishing program like InDesign or Scribus (with which I had just about zero familiarity in both cases) and I'd need to print the book in full color. For another, much of the work would be irrelevant to the Kindle version. In the end, I decided to primarily just include graphics that were directly relevant to the book's content.

Footnotes. I struggled with this one a bit. I really like using footnotes in my writing. Not so much for the purposes of exhaustively citing sources, but as a way to provide additional background or context without breaking up the flow of the writing. Unfortunately, footnotes on a Kindle aren't ideal as they're essentially hyperlinked endnotes that tend to take you out of the flow more than a digression would. That said, I decided to just use footnotes anyway because it's what I'm used to.

Tools. With the book now primarily text, there was no particular benefit to working in a program that let me see that text as it would appear on the printed page. (I'd need to format it eventually, but the writing now didn't need to reflect layout considerations to any significant degree.) I ended up using Scrivener on my Mac. One of the really nice things about Scrivener is that it's very easy to work on, label, group, and rearrange individual chapters—a great match for the style of my book. Once I got to a mostly complete first draft, I exported the text into iWork Pages and then worked on it in that format for the balance of the project. In general, I find Pages is less annoying than Microsoft Word in a variety of ways although, as we'll see, I did ultimately export from Pages to Word in order to create the Kindle version.

Editing:

Several colleagues read through the manuscript with greater or lesser degrees of rigor. I did a front-to-back fine-tooth comb read after the manuscript was complete and integrated. Furthermore, a decent amount of the content had been previously edited in some form or other. In spite of all this, I decided to engage a copy editor (a former intern of a magazine editor acquaintance of mine).

There were lots of corrections. To be sure, some were stylistic nitpicks, but there were also fixes for no small number of grammatical errors and misspellings. I can't say I was really surprised, having written and been edited for many years. Past a point, you just start reading what you expect to read and not what's actually on the page. The lesson? You absolutely must have a copy editor do a thorough review. And, in general, even friends and acquaintances who are good writers mostly won't read an entire book with the care needed to really clean it up.

(I initially considered hiring someone who would be better equipped to edit for content, tone, flow, etc. but the couple possibilities I had in mind didn't pan out and I didn't really want to spend a lot more money.)

Cover:

The cover arguably makes less difference to Amazon purchases than it does in a book store. Nonetheless, you want something that looks professional. I downloaded a template from Amazon and worked on it in Adobe Photoshop Elements. (I'm certainly not a design professional, but I do have some design background and training.)

Kindle:

Creating a Kindle version wasn't as straightforward as I had hoped/expected. If you expect to just take your CreateSpace PDF and upload it to Kindle Direct Publishing and have life be good, you're probably going to be disappointed. I'll probably do a separate post on this, but I'll note here a few specific issues I had.

  • Small inset photos won't display that way in the Kindle version. (These were head shots of the guest authors in the case of my book.) I ended up just taking out all of these photos, as well as photos in the section breaks that were just there for graphical interest.
  • You may have to manually create page breaks.
  • Depending upon the word processing program, you may have to manually change certain styles to be explicitly bold or italic as opposed to using a bold or italic font. (In other words, if a heading uses the Gill Sans MT Bold font rather than Gill Sans MT with a bold setting in the word processor, it won't display as bold on the Kindle. It doesn't help that Word seems to make some of these substitutions on its own.)
  • You may have to create a Table of Contents manually depending, seemingly, on the phase of the moon. You basically do so by inserting a "toc" (without the quotes) bookmark where you want the Table of Contents to be, inserting the text you want in the Table of Contents (without page numbers), and then creating a hyperlink for each line to a bookmark at the corresponding chapter. Yes, it's a pain in the neck. It's probably best tested by downloading the mobi file created when you upload your draft Kindle book to Amazon and opening it with a Kindle or Kindle app.
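For the curious, that hand-built Table of Contents boils down to a "toc" bookmark plus plain hyperlinks to per-chapter bookmarks. Here's an illustrative sketch of that structure, generated with a bit of Python just so it's easy to see; the chapter titles are placeholders (not chapters from the actual book), and the exact markup your word processor emits will differ.

```python
# Sketch of the linked Table of Contents structure a Kindle book needs.
# Chapter titles below are placeholders, not chapters from the actual book.
chapters = ["First Chapter", "Second Chapter"]

def toc_html(titles):
    # The "toc" bookmark marks where the Table of Contents lives; each
    # entry is a plain hyperlink to a per-chapter bookmark. Note the
    # absence of page numbers: Kindle text reflows, so they'd be meaningless.
    lines = ['<a id="toc"></a>', '<h1>Table of Contents</h1>']
    for i, title in enumerate(titles, start=1):
        lines.append('<p><a href="#ch{0}">{1}</a></p>'.format(i, title))
    return '\n'.join(lines)

def chapter_heading(i, title):
    # The matching bookmark each TOC entry links to.
    return '<a id="ch{0}"></a><h1>{1}</h1>'.format(i, title)

print(toc_html(chapters))
print(chapter_heading(1, chapters[0]))
```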

The good news is that, for many books, it doesn't seem as if you need to get down and dirty with HTML or ePub code; you can just stick to your word processor. But, in my experience, you are going to have to adapt the document created for the print edition to display nicely on a Kindle. (And it makes sense to think about the Kindle version as you're designing the book.)

Thursday, February 21, 2013

Links for 02-21-2013

Wednesday, February 20, 2013

Datamation Google+ Hangout on private/hybrid cloud

Fun discussion with James Maguire, Andi Mann, Kurt Milne, Mark Thiele, Sam Charrington, and me. We talk about roadblocks to building a cloud, whether clouds lower costs, how to get started, and whether openness is important. (You can probably guess my take on that last point.)

Links for 02-20-2013

Monday, February 18, 2013

Links for 02-18-2013

Friday, February 15, 2013

Podcast: Redmonk's Donnie Berkholz talks Big Data


Donnie Berkholz describes himself as RedMonk's resident Ph.D.

He spent most of his career prior to RedMonk as a researcher in the biological sciences, where he did a huge amount of data analysis & visualization as well as scientific programming. He also developed and led the Gentoo Linux distribution.

In this podcast, he discusses the impact of Big Data, why you need models, how to get started in Big Data, and what we'll be saying about the whole space five years from now.

Listen to MP3 (0:09:37)
Listen to OGG (0:09:37)

Transcript:


Gordon Haff:  Hello, everyone. This is Gordon Haff, cloud evangelist with Red Hat. I'm sitting here at Monki Gras, with Donnie Berkholz, who's an analyst at RedMonk. As well as having done lots of other exciting things. Donnie, why don't you tell people a little bit about yourself?
Donnie Berkholz:  Sure. I've been at RedMonk for a little over a year now, as an analyst. My history's actually pretty different from most people in tech. Because a year ago, I was a scientist doing drug discovery at Mayo Clinic. It turned out that I was having more fun doing all the scientific programming to enable that drug discovery and working with all the data involved in scanning lots of drugs through computers that I decided, "I want to work on that as my job." I didn't care so much about the drug discovery itself anymore.
Gordon:  You've also done a little bit with Linux, haven't you?
Donnie:  Definitely. I've been working on open source software for about 10 years now. Mainly on a Linux distribution called Gentoo, but also on a number of other projects and learned a lot about how to lead projects without authority and how to deal with community problems, manage communities and all that kind of thing.
Gordon:  We're going to talk a little bit later about some of the intersects between open source and big data, which is a pretty huge intersection. But first, let me start off with something that's maybe a little bit provocative. There's all this talk about data now. I remember back in mostly the 1990s, there was a lot of talk about something called data warehousing. This was how all this data was going to be collected in business, was going to do wonderful things. In fact, it mostly only did wonderful things for the companies selling the expensive software that was supposed to achieve these wonderful results. Why are things different this time?
Donnie:  Things are different, but they're not nearly as different as people think they are. One of the big differences is dealing better with unstructured data. Another one is that, with the whole trend of data science, you're getting a lot more people who really understand data involved. Instead of just storing the data and querying it by business analysts or data analysts. Now, you've got people who are professional statisticians involved in understanding that data. Modeling it using patterns and trends. Being able to think about data in terms of, not just trends ongoing in the past, but predictive trends, using better statistical models than just drawing a flat line.
Gordon:  I think that's an interesting point because Wired's editor in chief, at the time, Chris Anderson, wrote a rather provocative article, maybe a couple of years ago now, essentially saying that we don't need this model stuff any longer. We have enough data. We have powerful computers. The answers are going to fall out. I may be stereotyping him a little bit. But I don't think that much. What's your reaction to that?
Donnie:  I think it's true for some definitions of models. You're always going to want to understand the data. The only way to really understand things is by modeling them. Just looking at a distribution of a million numbers doesn't give you an understanding of what that data means or how it ties into any statistical distributions. That kind of information lets you much more accurately predict the future. Another point is that modeling, in the context that I think he meant, refers to very simplistic ways of modeling things. But there are much more popular methods now, called robust statistics: not pretending everything can be modeled with a simple average or a standard deviation, but instead saying, "I don't care what kind of distribution it is." So you're throwing out the model in that respect, but you still want to be able to understand that data and represent it in a more abstract, simpler way.
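To make Donnie's point about robust statistics concrete, here's a minimal sketch (not from the interview; the numbers are invented) of how a single bad data point drags the mean around while barely moving the median:

```python
# One outlier -- a sensor glitch, say -- distorts the mean badly,
# while the median (a robust statistic) barely moves.
import statistics

clean = [10, 11, 9, 10, 12, 10, 11]
dirty = clean + [1000]  # the same data plus one outlier

print(statistics.mean(clean), statistics.median(clean))
print(statistics.mean(dirty), statistics.median(dirty))
```

Running this shows the mean jumping by an order of magnitude while the median shifts by only a fraction, which is exactly why robust measures are preferred when you can't vouch for every data point.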
Gordon:  Now, you do have areas like low‑bias models and natural language processing, where we've had a pretty hard time coming up with models and have probably done a better job when Google just crunches through a huge amount of recorded data. Across a spectrum of problems, predominantly those related to business, social networking, advertising, and so forth, can you give us an idea of which areas you can get closer to solving by throwing a bunch of data at the problem, and which are the ones where we really do need to understand the problem better and come up with better models?
Donnie:  The easiest distinction to make is, how much data you have. If you can throw more data at it, a lot of times it is cheaper to just throw more data at the problem, rather than trying to model it better. But in some cases, you have a very limited data set, and you have nothing else to resort to but to try and model it smarter. One example might be, trying to understand very small subsets of a group, and what those subsets are doing. So, if you imagine that the purchasing patterns of people who are blind, might be very different from the general population, and you want to understand what those people are like and cater to them more effectively. But you're working with a tiny subset of one percent of all of your data, and it might be hard or even impossible to collect more of it.
Gordon:  Let's turn our attention to somebody who wants to get started in this big data space. Maybe they've been a programmer, maybe working in open source for a while, but they don't have a formal statistics background and want to get into this area. Or maybe there's someone who is a statistician and wants to get in. Let's talk about some of the open‑source tools, and for that matter knowledge, they can pick up to work more effectively here.
First of all, for example, we have distributed file systems, which is a big part of large data sets.
Donnie:  Yeah, a place that I would definitely get involved if I were thinking about working in big data, would be with the analytical tools themselves. So tools like R, or like Python, which is actually starting to become a very competitive solution for working with lots of data, with libraries like pandas. Now, there is a lot of work going on to graph more effectively too, not just to do the analysis itself. I'd certainly start there with the analytical tools. Now, on the data side, Hadoop is really the default choice. Much like Github is the default for version control, Hadoop is the default for Big Data.
There are lots of easy ways now to get going with Hadoop, whether it's using a Hadoop distribution from one of the popular vendors or looking into something like Project Serengeti, which is designed to help you set up a virtualized Hadoop cluster very easily.
Gordon:  If we're a few years from now and looking back...I know predictions about the future are always hard, but where do you expect we'll have seen the big wins and, conversely, where do you think we might see some disappointment, having come out of this Big Data enthusiasm today?
Donnie:  Five years out, we'll definitely have a much better understanding of what the ROI is from working with Big Data. Because a lot of companies are implementing Hadoop right now, but it's not really clear to them whether it's going to be a five percent improvement or a 25 percent improvement. And whether those costs will outweigh the benefits. One place where we'll see change is there. Another example is, I think, the idea of data science right now is one that's very exclusive sounding. What's going to happen is that's going to become much more democratized. With tools like R and Python becoming increasingly popular, not just within the data community, but in the broader world of everybody who has to deal with data on a daily basis whether that's a business analyst or a software developer.
We're going to see things become much more democratized. We're going to see what the true payoff looks like. Other than that, it's just going to be the same trends continuing. In terms of what we've seen before, with the popularity of GitHub. Everything's going to become more intermingled.
What happens is, the cultures start to mix. Suddenly you get all these interesting benefits that you wouldn't have realized were there, as those two cultures start to mingle.
Gordon:  One of the interesting things with Big Data, in many respects, like much of cloud computing, is how pervasive open source has become here. Linux was the guy on the outside coming in, competing with proprietary tools that were already in place. In many cases, with Big Data, it's not just the case of the open source tools being more innovative or less expensive. But they're often the default choice, as you put it.
Donnie:  Yeah. A big part of it is, when you have an open source solution you can have many companies collaborating together on it, to make it work much more quickly than you could otherwise. So you end up with something that works next month, instead of next year. This is what we've seen with Hadoop. It's a very effective collaboration between a number of different companies around the Hadoop ecosystem. That enables the user, which in this case is often a developer or data scientist, to start using it much more easily, to get features added much more quickly than they might otherwise. And as you know, developers have a very strong preference for open source. Just by being open source, it's already a step ahead of the competition.
Gordon:  Thank you very much. Anything else to add?
Donnie:  No. Thank you for having me on.
Gordon:  Great. Thanks, Donnie.

Thursday, February 14, 2013

Podcast: Redmonk's Stephen O'Grady on developers, The New Kingmakers

Redmonk co-founder Stephen O'Grady has a new book out, The New Kingmakers. It argues that developer influence has greatly increased because of open source and other reasons. I used to work with Stephen when we were both at Illuminata, and he has great insights about how developers work and what they're interested in. I encourage everyone to read the book. The price is right (free), but you shouldn't take that as an indication of the book's value. It's a well-written, focused book that should be read by anyone interested in how developers and their associated ecosystems have evolved.

I caught up with Stephen at Monkigras in London a couple of weeks ago. In this podcast, Stephen discusses the central thesis of his book and we spar a bit over the question of to what degree companies like Apple are really catering to the needs of developers (as opposed to the developers just going where the money is).

MP3 version (0:22:04)
OGG version (0:22:04)

Transcript:


Gordon Haff:  Hello, everyone. This is Gordon Haff, cloud evangelist with Red Hat. I'm sitting here with Stephen O'Grady, co‑founder of Redmonk and Boston Red Sox fanatic.
Stephen O'Grady: Indeed. How are you doing, Gordon?
Gordon:  Stephen, you just came out with a new book, "The New Kingmakers." First of all, congratulations. It's a lot of work. Maybe you could summarize the thesis of your book for us?
Stephen:  Essentially, the basic idea is actually pretty simple. What we've seen over the past decade is a number of different technology trends, related but quite distinct: things like open source, cloud, software as a service, and bring your own device. Together, they have conspired to introduce a shift in the power structure. The net of that shift, ultimately, is that developers today are in control of their environment in ways that they simply were not a decade or more ago. They make their own decisions as far as technology, and they don't have to ask for permission to get an operating system or a database. To use an application, they can go to software as a service.
To get hardware, they can go to the cloud. They're fundamentally enabled in a way that we really haven't seen before.
Gordon:  Can you give some specific examples?
Stephen:  The one that, I think, resonates with people probably the best is...I was a systems integrator before becoming an analyst. In the '90s, we would work with large businesses and small businesses. A couple startups here and there, to help them build out their technologies. If you think about the late '90s, one of the things that had to happen, for a startup to build out its infrastructure, was that they had to get funding because they needed to pay for a lot of technology. In other words, if you're going to start a business, typically one common pattern for businesses that wanted to scale out, would be...
They get hardware from Sun, who would also supply the operating system. They get a database from Oracle. They go out and get storage from somebody like EMC and so on. You look at the startups today and they're using a completely different stack. They're typically purchasing, essentially, infrastructure from a cloud provider, either Amazon or otherwise.
A lot of their infrastructure is free in terms of they're using Linux. They're using MySQL. They're using some combination of programming languages which are, themselves, open source. What that means is that the developers who used to have to ask permission to do anything, to start a business, or within the context of a larger business, to start a project.
They don't have to ask for permission anymore. They have all these tools. Really anything they want. Whether it's hardware, operating systems, database, virtualization layer, tools, there are lots of options available to them that cost nothing. Therefore they don't require procurement. They don't require permission. Unfortunately for businesses, developers aren't necessarily subject to the usual oversight.
They have the ability to bypass compliance restrictions and constraints, et cetera. It's a very different environment than it was a decade ago.
Gordon:  To your last point, that's an interesting one you bring up, though, because it does raise the possibility of development being done in this ad hoc environment. Then, when it comes time to go into production, maybe that's not an easy process, because it hasn't been thought out.
Stephen:  Sure. It's sort of a recent trend, commonly referred to as Shadow IT or Rogue IT. What it refers to are pockets within an organization who operate very independently and, in some cases, at odds with centralized IT. One example: you see this all the time in marketing departments. Marketing will go to IT and say, "Hey, I want a new site." Or, "I want an application that will track sales of my product." Or whatever. IT will come back and say, "That's fine. We'll get to that next year. We'll get to that in six months." The marketers say, "Look, this can't be that hard." In some cases they'll hire, but in many cases they find a couple of under‑utilized resources, in terms of developers, internally and they'll effectively spin up their own infrastructure.
They'll build this application. They'll build the website. They'll deploy it, essentially, with no input or help or assistance, et cetera, from IT. In the case of many marketing efforts, in this example, that's typically fine. You're not necessarily worried about the compliance or regulatory implications of sites like that. But there are organizations that are very concerned.
For example, if you're in finance, if you're in insurance, if you're in health care, most of your business is going to be subject to some form of regulation, some form of compliance requirements. If you have organizations that are, as I said, operating independently and paying less attention to these regulations and less attention to the constraints that are imposed for security reasons, or privacy and so on...
Then that poses an issue. That's an issue that a lot of businesses are concerned with today.
Gordon:  What do you see as a fix for that? Some hybrid IT management, in some way? I guess the challenge here is, don't throw the baby out with the bath water. Keep this flexibility while still meeting any regulatory and other things you have to meet.
Stephen:  You certainly don't want to throw the baby out with the bath water. Trying to push developers backwards is a bad plan for any number of reasons, not least of which is that you'll probably lose some of your better resources if you try to go backwards in time and make them subject to the same restrictions they were a decade ago. What we recommend, when we talk to people about this, is to try to understand why they're using these tools. In other words, why do people use the cloud? Again, there are many reasons. One of the simplest is the fact that you can deploy a server in 60 or 90 seconds. Contrast this with centralized IT, which, in many cases, is happy to be able to deploy a server in the same day...
That's a pretty big delta. If you're looking to rein in or at least gain some visibility into usage, you basically have two choices. You can try to say, "No, you can't do this and you can't use these tools." As I've said, that's an effort, in my opinion, doomed to failure in most cases.
The alternative is to say, "I understand that there are reasons and very legitimate business reasons that you're doing what you're doing. I'm going to try to go along with that program as much as I can. In return for that, I want visibility into what's going on." In other words, trying to meet developers halfway and having them do the same.
We've actually seen businesses do this: rather than trying to ban use of Amazon, they say, "That's fine. But it has to go on our central account." At that point, the developer gets to use Amazon and IT gets the advantage of at least knowing what they're consuming by having centralized billing and allocation. Those are the kinds of tradeoffs I would expect businesses to make moving forward.
Gordon:  What role do you see platform as a service having going forward, in terms of the development of enterprise applications, in terms of enabling developers?
Stephen:  My old colleague, Michael Coté, came up with a model for cloud services that we love. It basically is... His original thought looks like a cheeseburger. You have a couple pieces of bread and a burger in the middle. Ultimately, the model tries to explain software as a service, infrastructure as a service and platform as a service. It was addressing them according to their equivalents. As an example, Software as a Service, we consider quite accurately to be the modern manifestation of applications. Back in the day, when you might have a PeopleSoft… or all these applications that somebody would come and deploy, instead now you consume them as a service, in a browser. Infrastructure as a service looks a lot like what we used to just call traditional infrastructure.
Servers and storage and all the other pieces associated. Platform as a Service...Ultimately, the closest equivalent, in terms of the architectures that we're most familiar with, is middleware. It's a container that tries to make your application portable, from environment to environment, platform to platform, and so on. Longer term, I think that's the potential.
It isn't there yet. In the sense that we haven't seen developers really embrace these platforms in a volume sense. We've seen very specific and tactical interest. But none of them have the visibility of, for example, the old LAMP stack. None of them are that far along in that process. But over time, that's ultimately the role that we expect them to play in any infrastructure setting.
As I said, it's the need for a container that makes it easier. Not easy, but easier to port an application from one place to another...The demand for that will always be there.
Gordon:  I wouldn't be an analyst, and indeed a former colleague of yours, Stephen, if I didn't push back on your thesis at least a little bit. In your book you give Apple as an example of a company that has embraced developers, in a sense. Taking out the cover web page in iTunes to thank the developers for all the money they've brought Apple which, indeed, they have. But in fact Apple's been pretty widely criticized for rather unfriendly developer practices. Like, "Maybe they'll approve my app. Maybe they won't. And they won't tell me why." In fact, aren't developers just going to a company like Apple for the same reason Willie Sutton went to banks, because that's where the money is?
Stephen:  Chris DiBona, from Google, said it very well. He was talking about Android, but I think the same is true about Apple. Which is, there's a linear relationship between developer interest and the number of devices that are shipped. We see this with Apple's devices like the iPad, the iPod Touch and the iPhone. Ultimately, they ship a lot of them and developers are therefore interested. My contention, with Apple, isn't necessarily that they're treating developers well. I think it's difficult, if not impossible, to make that argument. As you note, they have, in many cases, done the exact wrong thing, in terms of working with developers. Their behavior, with respect to the app store, has frustrated tons of developers.
But the interest is still there, because of the volume. My point, rather with Apple, at least in the context of the book, was that Apple at least understands enough the importance of developers, to do some things very right. One of those is taking out a full page ad on their website to thank developers which does two things.
First of all, superficially (and I don't generally believe that Apple has a genuine sentiment about developers; I think it's, like any other business, self‑serving), they understand the importance of developers. So they thank them. You get the benefit there. But more importantly, it also reminds developers just how many devices have shipped.
That ad served the dual purpose of thanking developers on the one hand, and reminding them of how big the opportunity was at the same time. But at the end of the day, Apple's done many things right, with respect to development. They make the process of developing applications reasonably easy. They certainly make it easy to generate good looking applications.
A lot of the applications on the Apple Store are great looking. Apple's history with developers is uneven, but they've done more right than they've done wrong. I think their success reflects that.
Gordon:  And certainly their interaction with outside developers, compared to the traditional carrier model, that's a big difference.
Stephen:  Yeah. Frankly, I think that's one of the things that Apple is unappreciated for. In the sense that Apple broke, for the first time in the industry, the carrier lock on the customer. Apple controlled that relationship with the customer in ways that we haven't seen before. Very much like if we think back to music, Apple was the first technology company to stand up for users and say, "Look, yes we have these DRM technologies. But you can't just...We're not going to kowtow to the record company and let the record company dictate all the terms." The result of which is, basically, a non‑usable product. We saw that over and over again. Apple put their users first and that user relationship first, and fundamentally changed that. We've seen the same thing in the case of the iPhone. As you note, they've fundamentally broken that tight connection to the carrier.
Now Apple can have a relationship directly with developers that it wouldn't, for example, if it had to go through a carrier to manage that relationship with a developer.
Gordon:  At the end of the day, this isn't a Kumbaya world where developers rule everything. But if we look in the flow of history, compared to where things were 10, 15 years ago, it really is a different world?
Stephen:  It's a very different world and I think, for better and for worse, in some cases. Much more rapidly innovating world. When you remove the shackles from developers and you let them innovate at the speed that they want to innovate, not surprisingly, you're going to see a lot more innovation. That's going to continue.
Gordon:  Thank you very much, Stephen. Again, Stephen is the author of The New Kingmakers. Where can someone get this book?
Stephen:  All the information about the book, as well as links to it, is at thenewkingmakers.com.
Gordon:  Great, thank you, Stephen.
Stephen:  Thanks, Gordon.