Thursday, May 31, 2012

Links for 05-31-2012

Tuesday, May 29, 2012

We're going to have some exciting discussions at ODCA Forecast 2012!

Manhattan is going to be cloud computing central from June 11 to 14. 

Cloud Expo is something of the center of gravity, but there's plenty else going on. CloudCamp NYC is on the evening of June 12. It's free and they're always great events with lots of interaction. On Wednesday, Rishidot Research is organizing DeployCon, an Enterprise PaaS Summit. This new event cuts right to one of the hottest topics in cloud computing--bringing together the developer-friendliness of PaaS with the operational needs of enterprise IT. 

I'm going to try to spend some time at all three of those events, but what I'm most focused on is the Open Data Center Alliance's Forecast 2012 conference on Tuesday. The ODCA is a consortium of major IT organizations including, notably, large end-users; it's no vendor schmoozefest. The idea is to identify customer requirements and to influence industry innovation to address those requirements. One such example is virtual machine interoperability, as demonstrated in this Red Hat video using our CloudForms open hybrid cloud management software.

The event has a great lineup. For my part, I'll be a panelist at a pair of Forecast panels: one on software innovation and one on regulation. I'm excited about these panels for two reasons.

The first is that the organizers are doing a bang-up job of ensuring that these panels don't embody all those things that make us dread panels. You know what I mean. By the time everyone is done clearing their throats and telling you how smart they are, there's only time left for all the panelists to give more or less the same long-winded answer to a couple of desultory questions. There will be no panelist slides here. And I can assure you that the panelists are now quite familiar with terms like "rapid fire" and "interactive" from our prep calls. The sad thing is that panels often have great potential; that potential is just so rarely realized. I'm hopeful for these.

The other is that the topics for the two panels on which I'll be sitting are genuinely interesting to me. I won't steal my own thunder here, but I did want to share a few preliminary thoughts.

Rapid Fire Panel: Cloud Regulation (2:35-3:20)

Moderator: Deborah Salons

Panelists:

Brett Smith, Deutsche Bank

José E. González, Chief Business Development Officer, Trapezoid Digital Security Services, LLC

Gordon Haff, Cloud Evangelist, Red Hat

Marvin Wheeler, Chairman and Secretary, Open Data Center Alliance

My thoughts:

There are a lot of aspects to regulation but, given that I am neither a lawyer nor a governmental affairs expert, my real interest here is where regulation and technology intersect. From the perspective of cloud computing--and cloud computing within large organizations in particular--my concern lies with questions such as how appropriate policies can be embedded in applications and control mechanisms so that automated processes don't run afoul of regulatory regimes. This is an important question because automation means "Hands off!" Start interjecting manual processes to deal with regulatory requirements and you can't realize the benefits of automation.
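To make that concrete, here's a minimal sketch of what I mean by embedding policy in the automation itself--entirely illustrative, with an invented data-residency rule and invented names rather than any real product or regulation:

    # Entirely illustrative: a hypothetical policy gate inside an
    # automated provisioning workflow. The rule and names are invented.
    ALLOWED_REGIONS = {"eu-west", "eu-central"}  # a made-up data-residency rule

    def policy_allows(workload):
        """Return True if the workload satisfies the embedded policy."""
        if workload.get("contains_personal_data"):
            return workload.get("region") in ALLOWED_REGIONS
        return True

    def provision(workload):
        """Provision automatically; humans only ever see the exceptions."""
        if not policy_allows(workload):
            raise PermissionError("policy violation: %s cannot run in %s"
                                  % (workload["name"], workload["region"]))
        return "provisioned %s in %s" % (workload["name"], workload["region"])

    print(provision({"name": "billing-app", "region": "eu-west",
                     "contains_personal_data": True}))

The point isn't the dozen lines of Python; it's that the check runs on every request with no human in the loop, and only the exceptions get escalated.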

Cloud Software Innovation Panel (1:50-2:35)

Moderator: Richard Villars, Vice President, Information & Cloud, IDC

Panelists:

Elad Yoran, Chairman and CEO, Vaultive, Inc.

Gordon Haff, Cloud Evangelist, Red Hat

Greg Brown, McAfee

John Engates, CTO, Rackspace

Reuven Cohen, Senior Vice President, Virtustream

My thoughts:

This should be an interesting discussion. On the one hand, some question whether open source matters in the cloud, even as almost all of the top public clouds have open source as their foundation. Their argument revolves around the fact that availability of source code doesn't have the same meaning in a world where software services are often delivered over the network and are intimately tied to the data and compute infrastructures on which they run. However, I'd argue that open is both a bigger and a broader issue in cloud computing, as I wrote in Why the Future of the Cloud is Open.

How does this relate to software innovation? It relates because, even if open source historically was often about cheaper substitutes for expensive proprietary software, it's now more and more about fostering innovation through communities, including user communities. Is open source the only mechanism through which innovation can happen? Of course not. But it's a powerful mechanism as are other aspects of openness such as open APIs.

Friday, May 25, 2012

Links for 05-25-2012

  • Doc Searls Weblog · After Facebook fails - "But totally personalized advertising is icky and oxymoronic. And, after half a decade or more at the business of making maximally-personalized ads, the main result is what Michael calls “the desultory ticky-tacky kind that litters the right side of people’s Facebook profiles.”"
  • Kodak was never going to be the Kodak of digital photography | Challengers - CNET News - "It's also not as if some other company has emerged as the Kodak of digital. In the company's golden age, consumers used a Kodak camera to take pictures on Kodak film, which they then had processed by a Kodak lab on Kodak paper. Today, no single company has replicated that ecosystem in digital form. There are camera manufacturers, and memory-card manufacturers, and printer companies, and photo-sharing sites--and all of these businesses benefit from healthy competition among multiple major players, rather than the monopolistic position that Kodak once enjoyed."
  • Facebook IPO Post Mortem – Killer – but not for the reasons you think ! « blog maverick - Cuban hits some key points. It's even harder to monetize with ads in mobile and he (I believe correctly) notes that we're moving away from all-you-can-eat mobile data which will make the situation even worse.
  • Stealing the Shows - TIME - RT @poniewozik: Fox suing over Auto Hop ad-skipper. I have a (pre-suit) column on it this week: my life as a TV thief.
  • Secret memo reveals which telecoms store your data the longest | Ars Technica - Not sure I buy that only Verizon retains text message contents. The others must store it somewhere for some length of time in order to transmit it, e.g. to phones that aren't currently on.
  • The Facebook Fallacy - Technology Review - Some good discussion in comments here about limitations of gaining insights from even large qtys of data.
  • Java creator unhappy with Oracle trial outcome | ITworld - Seems to sum things up pretty well (though I'd leave out the word 'sadly'). -- "Based on testimony in the trial, and remarks from Gosling and others, it's clear that whether they meant to or not, Google very much irked people at Sun Microsystems when Google decided to bypass Java and go with a clean-room implementation of Java in the form of the Dalvik VM in Android. And "irked" is probably an understatement. But sadly, you can't really sue people for being jerks. And there's also the argument that Sun may have forced Google's hand by dual-licensing Java under the GPL and a proprietary license for commercial use. That was certainly Sun's prerogative, of course, but it doesn't completely jibe with their much-touted "Java is free" mantra."
  • Create Custom Skins for Laptops & Netbooks | GelaSkins - This is very cool. You can create laptop skins with your own photos.
  • Update: Mike Lynch leaves HP Autonomy - ComputerworldUK.com - RT @maslett: "the entire management team and 20 percent of all Autonomy's staff have left since the HP takeover"
  • Opposing the New York Public Library - The Daily Beast - RT @thedailybeast: A battle for New York Public Library's (@NYPL) soul prompts the question: Who's reading the books?

Wednesday, May 23, 2012

Links for 05-23-2012

Thursday, May 17, 2012

Podcast: Kurt Milne discusses how organizations are building clouds


Kurt Milne is the managing director of the IT Process Institute and has been surveying organizations about their cloud adoption. The most common strategy is to leverage existing resources where possible by taking an open, hybrid approach. Kurt is also co-author, along with Andi Mann and Jeanne Morain, of Visible Ops: Private Cloud. Kurt discusses:

  • Strategies for cloud adoption
  • How open cloud approaches are proving most popular
  • How organizations are achieving agility through self-service
  • Owning vs. renting capacity
  • How IT decision makers need to look at cloud through a framework of cost, benefit, and risk
  • The need for a portfolio view
Listen to MP3 (0:14:44)
Listen to OGG (0:14:44)

[TRANSCRIPT]



Gordon Haff:  You're listening to the Cloudy Chat Podcast with Gordon Haff.

Gordon:  Hi, this is Gordon Haff, cloud evangelist with Red Hat and I'm out here in Silicon Valley at the Open Cloud Conference. I'm here with Kurt Milne, who is the managing director of the IT Process Institute. Hi, Kurt.

Kurt Milne:  Hello. Glad to be here.

Gordon:  Kurt, could you maybe tell us a little bit about yourself and the IT Process Institute?

Kurt:  The ITPI is an independent research organization. We use empirical evidence‑based studies to try to identify who are the top performing IT organizations and what they do that's different from other folks. We do studies. We write white papers, prescriptive patterns. We also self‑publish the Visible Ops books. A lot of folks have seen the little yellow and black Visible Ops book. We've got Vis Ops Security, and also most recently is Visible Ops Private Cloud.

Gordon:  Kurt, you've started doing some survey work around cloud adoption. Rather than just getting opinions about where things are going and coming up with some numbers, this study that you've been working on really tries to correlate what people are doing with the results they are seeing, which seems pretty interesting. I know it's still pretty early for the analysis of data and finalizing things, but I'm wondering if you could share some of the interesting things that you're finding.

Kurt:  We fielded a survey through the folks at Cloud Camp and also at some other sources. We got responses from about 150 companies that have deployed private or private‑hybrid cloud past the proof‑of‑concept stage. As you mentioned, the goal is to try to look at what were people actually doing. What were some of the pre‑conditions before they started their cloud project? What were some of the key dependencies during their project? Then, what were the results that they achieved? Both positive results and project friction points. The idea is to try to figure out what are those combinations of factors that contribute to cloud project success.

Gordon:  To start off with, maybe you could share: have you found any surprises so far?

Kurt:  Well, I think one of the surprises is that there are a lot of organizations, about 40 percent of the respondents, that suggested that their primary cloud strategy or goal is to build an open cloud or a private-hybrid cloud that leverages their existing assets as much as possible. Another strategic option would be to build what I call a siloed cloud, which might be a cloud that's carved off in the data center for dev and test environments, self-serve resources for developers, that sort of thing. But the primary strategy was that open cloud: leverage assets as much as possible and then tap externally acquired capacity.

Gordon:  That seems to fly a little bit in the face of some of the conventional wisdom we seem to hear out here in Silicon Valley: that the cloud is new and enterprise IT is old, and that the cloud is all about starting afresh in a greenfield.

Kurt:  Well, I think if you step back and look at it from the CFO or controller's perspective and say, "Why are we renting computing assets from a third party, when we've got underutilized assets internally?" I think that story of, "Let's do what we can with what we've got before we tap external resource pools" ‑‑ I think that still makes business sense.

Gordon:  Great. Yeah. This isn't to say, of course, that people are not making use of Amazon, or that they're not putting in new servers, but it really does speak to this idea that we can't afford to just throw everything away and start with a clean sheet of paper. That would be really appealing as an enterprise architect, but it's probably not practical for a lot of enterprises.

Kurt:  Well, about 10 percent of the folks that filled out the survey did indicate that that was their cloud strategy: they're some kind of service provider, they're building a cloud solution, and they're not encumbered by existing legacy applications. There are folks out there that have that luxury of being able to start from scratch and just look at their requirements without as many constraints, but most organizations have constraints already in place.

Gordon:  Among those enterprises that are building open clouds, open‑hybrid clouds, why are they doing it, and how are they achieving success?

Kurt:  Well, the use cases most frequently deployed by the 40 percent of companies in that open-cloud category were self-service development and test environments and self-service resources--so that's really helping achieve an agility state, where folks can get self-service, on-demand access to things that maybe they previously had to go through operations and wait for. There's a self-service agility element there. But then they're also using the cloud for basic blocking-and-tackling operations, things like backup and high-availability disaster recovery as well. Then we're also seeing a lot of interest, based on the survey, in scale-up and scale-out capabilities that may be difficult in a more traditional, static IT environment.

Gordon:  It's interesting, although it's very consistent with a lot of other data we've seen, that cost savings does not seem to be the primary driver here. Not that any CIO is going to tell you, "Oh, I can spend as much money as I want," but cost doesn't really seem to be the driving force behind people adopting cloud.

Kurt:  I agree with you. I think there are some efficiency gains, when we look at what were the results of your cloud effort ‑‑ more development efficiency, more run‑time or operations efficiency. I think there are improvements there, but it doesn't really suggest that companies are tapping Amazon and others just for the cost‑savings aspects. It's really the process efficiencies in combination with those agility factors that I think is driving the adoption.

Gordon:  Let's talk about hybrid clouds. I think sometimes "hybrid" has been taken to mean this auto‑magical, super‑fast, dynamic shifting of workloads among clouds, which, frankly, I haven't seen happening very much. But I do see, still, a lot of interest in being able to move between clouds, even if it's done at an administrative level as opposed to an automatic, load‑balancing way.

Kurt:  Yeah. Interestingly, I think in the last couple of years, as cloud has gotten a lot of attention, people have talked about this bursting concept, where you're going to have a workload internally. Then if you have an unexpected or even a planned usage spike that you'll be tapping external resources, but that was the lowest prevalence. When we asked what folks were using their cloud for, that had the lowest percentage of companies indicating that that's what they were using it for. Whereas starting in the cloud as a prototype environment, doing dev and test work and then, once the workload stabilized, bringing it back in‑house, had a much higher rate of response versus bursting out.

Gordon:  That seems to be consistent with particularly what some companies, like Zynga, for example, are doing, this idea that you own the base and you rent the peak.

Kurt:  I agree. I think that is the Zynga model. I think it becomes an owning versus a renting kind of decision, right? In some cases, it makes sense to rent resources, and then, in other cases, it makes sense to purchase it and utilize the resources. I think, in Zynga's case and a lot of the respondents in the survey, there are times when you rent resources, but if it becomes stable and predictable, then it makes sense to buy the assets needed to support those over time.

Gordon:  The interesting point you're making is that this becomes, essentially, a financial or a capital-budgeting decision at that point, OPEX versus CAPEX. When a company decides to rent or lease or buy company cars, for example, they don't have to get a different kind of car depending upon what their financial model looks like, and that hasn't always been the case in the IT industry.
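[A quick back-of-the-envelope sketch of the own-the-base, rent-the-peak arithmetic we're circling here--all numbers invented purely for illustration:]

    # Invented numbers, purely illustrative: owning capacity has a high
    # fixed cost; renting it on demand has a high marginal cost.
    OWN_PER_SERVER_MONTH = 250   # hypothetical amortized purchase + operations
    RENT_PER_SERVER_MONTH = 600  # hypothetical on-demand price

    def annual_cost(owned_servers, rented_servers, rented_months):
        return (owned_servers * OWN_PER_SERVER_MONTH * 12
                + rented_servers * RENT_PER_SERVER_MONTH * rented_months)

    # A stable base of 100 servers plus a 40-server spike two months a year:
    print(annual_cost(100, 40, 2))  # own the base, rent the peak: 348,000
    print(annual_cost(140, 0, 0))   # own enough for the peak: 420,000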

Kurt:  It's interesting you mention that because I know with Zynga, one of the driving factors of their decision to build a private cloud was that they wanted some capital assets that they could depreciate. I think there's always this talk in IT of operationalizing costs, converting from a capital‑asset cost to an ongoing operations cost. In their case, with no assets, a completely virtual company, if you will, they were actually looking at getting assets to depreciate as one of the drivers for building something, which I thought was opposite of what a lot of people talk about.

Gordon:  The other thing you see relates to this idea of pay-per-use. People and companies are very big on pay-per-use, as long as it means they pay less than they would have otherwise. They're not so big on pay-per-use if it means paying more.

Kurt:  I think the other aspect to that is I think companies want to pay‑per‑use if that cost is tied to revenue. If you've got a model where, adding an incremental customer, you have an incremental cost associated with delivering service for that customer, I think the financial folks like that kind of arrangement. In some cases, they're willing to pay more for service, if that service is tied to revenue that's scaling up and down versus maybe paying less for a fixed asset, where that cost isn't tied to revenue. I think that goes into the decision, as well.

Gordon:  That's also a fairly traditional part of corporate finance. You're willing, at some level, to pay at least a small premium for being able to match up your revenue streams and your cost streams. Basically, this isn't any different from that. Any final points you'd like to make?

Kurt:  Well, I think it's interesting. We're ending the podcast here talking about more traditional management methods, but I really do see a lot of these cloud concepts enabling new dynamic IT capabilities. IT decision makers continue to be very pragmatic and need to look at all of these things in a framework of benefit, cost, and risk. I think these new capabilities have different cost and risk factors, but there's no magic here. It all needs to be viewed through a pragmatic lens in order to make decisions on what's the best path forward.

Gordon:  That's a great point. I think it's really exciting, a lot of this stuff that's happening in cloud: new capabilities, all of this API-based, modular computing, self-organizing, what have you. I think it's all great, mind you. I'm certainly not suggesting otherwise. But we do, at some level, need to balance all the new-application enthusiasm against the fact that, in most cases, we have run-the-business applications that we probably don't want to just abandon or dismiss as old legacy apps that we can't do anything with and shouldn't concentrate on any longer. Those are the apps that are running the business, and, in many cases, there are things that open-hybrid cloud management can do to make them more flexible, as well.

Kurt:  Yeah, I think it becomes a portfolio-view approach: organizations need to look at their whole suite of applications and determine what the best fit is from an environmental standpoint. They can't throw out legacy apps; those apps are being used by folks in the business to do business functions. I think it's about creating some framework for evaluating: What's the business objective? What are the constraints? What are the architectural options that make sense across a whole range of physical, virtual, and cloud environments? I think it's going to be a mixed model. I don't think any one of those is going to prevail or any one of those is going to totally go away.

Gordon:  You go into quite a bit of methodology associated with that in your last book.

Kurt:  That's right. The plug for the book, the "Visible Ops Private Cloud: From Virtualization to Private Cloud in Four Practical Steps," was based on interviews of about 30 IT organizations that had deployed some form of private cloud. We were really trying to capture some of the lessons learned. What do you know now that you wish you had known at the beginning of the project? What were really the key success factors? Then, put those into a step‑wise methodology that we think any organization should at least consider some of the factors there as they develop their cloud strategy.

Gordon:  Great. Well, thanks very much, Kurt. Been good talking to you.

Kurt:  All right. Thanks, Gordon. I enjoyed it.

Gordon:  Bye everyone.

Tuesday, May 15, 2012

Links for 05-15-2012

Podcast: Complex adaptive systems and APIs with James Urquhart of enStratus


Cloud computing requires a mindset that approaches system architecture as a much more distributed, heterogeneous, and even self-organizing entity than was the historic norm in IT. enStratus VP of Product Strategy and GigaOm blogger James Urquhart shares his thoughts on the topic as he discusses:
  • Complex adaptive systems
  • What high availability means in the cloud
  • The role of standards
Listen to MP3 (0:14:22)
Listen to OGG (0:14:22)

Transcript:


Gordon Haff:  Hi everyone. This is Gordon Haff, Cloud Evangelist with Red Hat. I'm here at the Open Cloud Conference in the Bay Area. I'm sitting here with James Urquhart, who's the VP of product strategy for enStratus. Hi, James.

James Urquhart:  How are you, Gordon? Good to see you.

Gordon:  We've known each other for a while. You've had blogs in a number of places that I've also written on, and currently you're on GigaOm, right?

James:  Yeah. I'm a regular contributor to the GigaOm cloud section. I should be blogging more often than I do. You'll see me about every two to three weeks on GigaOm.

Gordon:  I have the same problem, James, getting stuff written on a regular basis.

We've had some really interesting conversations about how the cloud is changing systems architecture. In fact, you've had some interesting thoughts about how to think about architectures with the cloud.

James:  For a long time I've had an interest in the subject of complex adaptive systems. There's an entire science around a world in which there are many, many different individual agents that each have their own behavioral decision-making process, whether that's DNA or whether it's the economic space with buyers and sellers. And then they interact in very arbitrary ways over a very large scale, creating a system that ends up having its own emergent behavior, with no central control of that behavior. That's just the way things work out as these agents interact.

If you look at cloud computing, what we're really beginning to do in a very large way is to step out of the silo world into much more of a heavily integrated world where the applications, the infrastructure, the services being delivered are all agents that are being very often decided by different people. A great example of that is I might have multiple agents as an enterprise running on Heroku, which in turn is running on Amazon Web Services.

And so, the behaviors of the systems are decided very independently. The subcomponents of the system are decided very independently. What you're beginning to see is that complex adaptive systems behavior slowly but surely starting to show up in IT in general, in computing in general, on the Internet in general. In part because cloud computing is an enabler of that.

What that means is, if you embrace and understand the complex systems piece of the puzzle, what you're really going to begin to see is a way to understand and to embrace the complexity of the system and to understand how to do your little pieces to make sure that your agents that you care the most about thrive and survive in that system.

I think that's really, to me, the critical shift in thinking. From trying to figure out how to build something that just works and will never break to building something that adapts to the environment and constantly is able to thrive within a changing environment.

Gordon:  I think one sort of interesting way to think about that: at a conference a couple weeks ago, someone got up and asked, "Is there a way to get five nines reliability in the cloud?" And of course, you're coming from, among other things, working on large systems in the past... The traditional thinking there was that you had some sort of failover clustering capability among large Unix systems or among large mainframes, whereas from a transactional perspective--stock transactions, whatnot--however many minutes of downtime a year equated to five nines.

And really, though, that's not the right question to ask in the cloud, is it?

James:  No. In fact, there's a really, really interesting part of complex adaptive system science that's really just starting to come out now and be explored by academia in a large way. Now, there's actually a tradeoff between stability and resiliency. If you attempt to say "I want five nines by knowing exactly what my stack is and exactly how that stack works and that nothing is going to fail in that stack," or "If something fails, I know exactly how something else will come in and replace it. But I'm going to make sure that this thing is as stable as possible." The problem you have is there's a number of things that can come in from the environment that shift the ground underneath your designs so much that there's no way that your design can in fact adapt to that change and it will fail as a whole.

A resilient architecture is one much more where you say, "Look. The individual components each have to be able to not only survive the environment as it stands, but the individual components have to be designed in a way that as a horizontally scalable system, as a group of agents working together, that as changes happen in the environment that the system somehow keeps going. The subsystem somehow finds a way to at least meet a minimum set of capability that keeps the system going."

If you look at the way Amazon's designed, if you look at the way that Netflix is designed, this is exactly what they do. That front page of Amazon's not an application that’s made up of a whole bunch of pieces that are all designed to be stable. It's made up of a whole bunch of things where there's a whole bunch of failover and a whole bunch of different ways that data can be gathered.

So go to a cache. If the cache is gone, you go to the core data source. If that data source is gone, there's this other data source that will give you kind of a remotely good picture that you can adapt to. If that data source is gone, then you can say, "Well, I'm just not going to display that element of the page."

But the home page, that Amazon purchase page, is always there. When was the last time you went to Amazon and it was gone completely? That kind of resiliency...Right? Things fail all the time in Amazon, but that resiliency of the overall system gets you the appearance of five nines plus.

I think that that's the beginning shift of the mentality to say, "Rather than focusing on the component and making the component as stable as possible, focus on the relationships between components and how components work together, and how can you build as much resiliency into the different relationships and the way things work together so that the system as a whole is in fact quite available, and quite resilient."
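[A minimal sketch of the cascading-fallback pattern James is describing; the data sources here are invented stand-ins, not anyone's actual architecture:]

    # Illustrative only: degrade gracefully through fallback data sources
    # rather than depending on any single component being stable.
    def read_cache(product_id):
        raise TimeoutError("cache node down")         # simulated failure

    def read_primary_store(product_id):
        raise ConnectionError("primary unreachable")  # simulated failure

    def read_stale_replica(product_id):
        return {"product": product_id, "recommendations": ["b", "c"]}

    def fetch_recommendations(product_id):
        """Try each source in order; omit the element if all of them fail."""
        for source in (read_cache, read_primary_store, read_stale_replica):
            try:
                return source(product_id)
            except Exception:
                continue  # fall through to the next, possibly staler, source
        return None  # render the page without this element instead of failing

    print(fetch_recommendations("a123"))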

Gordon:  There's this sort of idea that we're just going to have these utterly standardized APIs that work together in lockstep and communicate that way. That's really not the future, is it? What we're really talking about is architecting for a very heterogeneous environment where you sometimes need to translate from one thing to another and connect to things in a loosely coupled way.

James:  I don't think standard APIs are the problem. I think the way to look at it, though, is that you're trying to find the patterns, and you're trying to make sure that you can build to the patterns that work and adapt and evolve those patterns over time. But I think there's a place for standard APIs. Frankly, provisioning a server as an action--there are very few highly differentiated ways that you can provision a server. I think it's very fair to say that we're getting to a point where, for a Linux system in an x86 environment, there can be a very, very standard way of doing that basic task.

But that's not the application. That's not the thing that solves a business problem up above.

I believe that there are standards in places that you can converge on, but the idea that there's one stack that solves the problem...I don't think I've heard anybody really argue that it's all going to be this one big stack and that everybody's going to move to it or else they're non-standard.

I think what you have to realize is that there are different components in the different stacks and that they give you different value. I think it's fine to talk about open standards for interfaces and for formats, but when you go farther than that--when you try to say the stack is locked down, that you have to do it this way with these sets of components--I think that's the point at which you break the model. I think that's when the market says, "That's a broken model," and does something different.

A great example, really quickly, about that is just when ITIL took off and companies started identifying ITIL and really, really being hip on it. DevOps shows up. Because ITIL was broken for some aspects of what the business wanted to do, DevOps is much more flexible in terms of the agility when you need agility. So, in fact, it is disruptive to what we thought was the commodity way of doing IT. It's always going to be that way.

Gordon:  I think, really, if you look at the history of IT, big monolithic approaches really have not done as well as more nimble, more modular approaches.

James:  And that's exactly true. There's a gentleman by the name of Simon Wardley who has some great writing on this, where he talks about how there are spectrums of business activities: there are times when you need to be highly, highly agile, and there are times when you need to be locked down and to very closely control change and adjustments. But what happens is you go through that cycle and get to the end of it, to where things are a little bit more locked down. That enables a whole new set of innovation on top, which, in the end, may trickle down and say, "Yep, we need to rethink the way that we're doing X."

It's about being prepared and understanding that that constant churn is a fact of life. You need to develop your processes with that concept in mind. And the patterns and the toolsets and the infrastructure that we build out for the cloud are going to have to take that complex-systems approach into account as well, and really begin to embrace this concept of focusing on the relationships between things more than on the components themselves.

Gordon:  And we certainly do seem to be shifting to an API-driven world in a lot of ways. I guess a lot of people tend to think in terms of the Amazon APIs and the Flickr APIs--in many cases, the more consumer-oriented services. But more and more businesses--credit card processors, banks, what have you--are starting to expose APIs, if not for general public use then for the use of their partners.

James:  Yeah, and I think what's really, really fascinating about that is why those APIs are being exposed. If it's a surface where you give it some data and instructions and it returns something of value, you're basically providing that service through an API instead of through human methods or whatever it might have been before. In other areas, you're saying, "Well, the API is really about how you consume another resource downstream," and the problem you have there is that the API isn't enough. So I think what you're going to see is giant success in terms of exposing business capability through APIs. I know of companies out there--like a giant construction company that has this phenomenal API layer over all of their backend systems. They're writing mobile apps that will blow your mind, at a rate that, in turn, would blow your mind as well, because they just call to a standard REST kind of syntax and structure.
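[For illustration, consuming such an API layer needs nothing more exotic than a plain HTTP call; the endpoint and fields below are entirely fictitious:]

    # Fictitious endpoint and fields, for illustration only.
    import json
    import urllib.request

    def open_work_orders(site_id):
        """Call a hypothetical REST layer over backend systems."""
        url = ("https://api.example.com/v1/sites/%s/work-orders?status=open"
               % site_id)
        with urllib.request.urlopen(url) as response:
            return json.loads(response.read())

    # A mobile app is just another client of the same interface:
    # for order in open_work_orders("site-42"): print(order["id"])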

That stuff really works really, really well. But when it comes to saying, "Hey, I want to provision a service, so here's my API and that's going to work all the time"--that's not true today and it may never be fully true. It may be more true than it is today, but I think you have to understand that there's a lot more that has to be standardized than just APIs to get to that point. That's going to take more work and more effort.

But projects like OpenStack, like CloudStack, like Eucalyptus, they have a great opportunity to, in fact, create mini ecosystems or even large ecosystems out there where that's more true than it would be for the cloud as a whole.

It excites me that the API story is taking off because for developers it's powerful. But I also...It's temporary when I say that. I'm sort of saying it's not enough to say APIs. We need common formats and additional common interfaces. That work still needs to be done.

Gordon:  Yeah, you really need hybrid cloud management to take care of some of that, at a level that's below what developers really ought to be worrying about.

James:  Yeah, and that's why...This is the reason why enStratus is focused on the application level of operations. We're about application operations in the cloud. How do you consume cloud services to deliver application capabilities? We're largely focused on infrastructure as a service today, but that's obviously an evolving picture. I think when you look at the problem of saying, "My tools for running my application are in a cloud service," that's very limiting in terms of how you do things.

Having tools that say, "Let's step back and abstract how we want to operate our applications in general," and then apply that to the different clouds we might want to consume in the way that we operate. Make sure we're applying consistent governance. Make sure we're applying consistent automation to the approach. Make sure we're applying that in a very independent way so not only are you independent from the clouds that you can choose but also in terms of the tools that you apply to operate.

So with DevOps tools, do you want to use Chef or Puppet? What management tools do you want to use? What monitoring tools, those kinds of things, do you want to use in the environment? That's really what the enterprise needs: the ability to begin to abstract application operations and to incorporate the things that they need to in that way.

It means looking at application operations as separate from infrastructure and service operations--the delivery of a cloud service to the end customer. So building your private cloud is not an application operations problem; it's a service operations problem. Consuming that private cloud is an application operations problem.

Gordon:  Great. Thanks very much, James. Anything else to add?

James:  No. Congratulations to Red Hat on their wonderful launch and with their OpenShift stuff. I'm very excited to see what's going on in that PaaS side of the market. I think that's a really exciting space to watch. And I'm very happy to have been here with you today and have a chance to talk to you.

Gordon:  Great. Thanks, James.

Friday, May 11, 2012

Links for 05-11-2012

Thursday, May 10, 2012

PaaS Infographic

Lots of growth forecast. I'd argue that the relatively slow pickup to date has been a function of the fact that many first-generation PaaS platforms have been specific to a single provider. More open approaches, such as that taken by Red Hat's OpenShift, are provider-independent--which greatly reduces the possibility of vendor lock-in.

Links for 05-10-2012

Monday, May 07, 2012

Links for 05-07-2012

Thursday, May 03, 2012

Standard APIs: There's no substitute for open

[As I touch on intellectual property issues herein, I'd like to remind everyone that this is my personal blog and should not be in any way taken as official Red Hat positions or statements, nor presented as such.]

Just because something is widely used doesn't make it a standard--de facto, de jure, or otherwise--in the sense that anyone can use and build implementations of that standard without restriction. Indeed, as we shall see, even standards that are "blessed" by powers-that-be are not always fully open in the ways that I have outlined previously.

Standardization has been around for a long time. The IEEE tells us that:

Based on relics found, standardization can be traced back to the ancient civilizations of Babylon and early Egypt. The earliest standards were the physical standards for weights and measures. As trade and commerce developed, written documents evolved that set mutually agreed upon standards for products and services, such as agriculture, ships, buildings and weapons. Initially, these standards were part of a single contract between supplier and purchaser. Later, the same standards came to be used across a range of transactions forming the basis for modern standardization.

A lot of this early standardization pretty much came down to custom. The convoluted history of why we drive on one side of the road in a given country is instructive. (Though each country's conventions are now enshrined in The Geneva Convention on Road Traffic.)

The history of the shipping container, as detailed in Marc Levinson's The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger, offers another fairly typical historical example. Incompatible container sizes and corner fittings required different equipment to load and unload and otherwise inhibited the development of a complete logistics system. The standardization that happened around 1970 made possible the global shipping industry as we know it today--and all that implies. The evolution of standardized railroad gauges is similarly convoluted. The development of many early computer formats and protocols was similarly Darwinian.

It's tempting to take this past as prologue and conclude that similar processes will continue to play out as we move to new styles of computing in which different forms of interoperability assume greater importance. For example, published application programming interfaces (API) are at the heart of how modular software communicates in a Web services-centric world. One set of APIs wins and evolves. Another set of APIs becomes a favorite of some particular language community. Still another doesn't gain much traction and eventually withers and dies. It sounds like a familiar pattern. 

But there's an important difference. In today's software world, it's impossible to ignore intellectual property (IP) matters, whether copyright, patent, trademark, or something else. An API isn't a rail gauge--though perhaps someone today would try to patent that too.

As a result, tempting as it might be to adopt some API or other software construct because it's putatively a "de facto" standard, which is mostly a fancy way of saying that it's popular, that may not be such a good idea.

RedMonk's Stephen O'Grady offers some typically smart commentary on why:

it’s worth noting that many large entities are already behaving as if APIs are in fact copyrightable. The most obvious indication of this is Amazon. Most large vendors we have spoken with consider Amazon’s APIs a non-starter, given the legal uncertainties regarding the intellectual property involved. Vendors may in certain cases be willing to outsource that risk to a smaller third party – particularly one that’s explicitly licensed like a Eucalyptus [coverage]. But in general the low risk strategy for them has been to assume that Amazon would or could leverage their intellectual property rights – copyright or otherwise – around the APIs in question, and to avoid them as a result. Amazon, while having declined to assert itself directly on this basis, has also done nothing to discourage the perception that it has strict control of usage of its APIs. In doing so, it has effectively turned licensed access to the APIs into a negotiable asset, presumably an outcome that advocates of copyrightable APIs would like to see made common.

In fact, lack of openness can even extend to standards that have gained some degree of governmental or quasi-governmental approval--which is, after all, a political process. Last decade's fierce battle over Microsoft's submittal of its OOXML document format as a standard to ECMA and ISO is perhaps the most visible example. The details of this particular fight are complicated, but, in Kurt Cagle's words, "The central crux of the [then-]current debate is, and should be, whether Microsoft’s OOXML does in fact represent a standard that is conceivably implementable by anyone outside of Microsoft."

Issues of the conditions that should be satisfied in order for a vendor's preferred approach/format/etc. to become a "blessed" standard continue to reverberate. The latest round is about RAND (Reasonable-and-Non-Discriminatory) licensing and whether that can take the place of truly open implementations. It's essentially an attempt to slip proprietary approaches requiring a patent license into situations, such as government procurements, that require open standards. 

But, as Simon Phipps, a Director of the Open Source Initiative and of the UK's Open Rights Group, puts it:

The presence of RAND terms at best chills developer enthusiasm and at worst inhibits engagement, as for example it did in the case of Sender ID at IETF. As Välimäki and Oksanen say, RAND policy allows patent holders to decide whether they want to discourage the use of open source. Leaving that capability in the hands of some (usually well-resourced) suppliers seems unwise.

At one level, the takeaway here might be "it's complicated." And it is. But another takeaway is pretty simple. You can dress up proprietary standards in various ways and with various terms. And such standards have a place in the IT ecosystem. But they're not open, whatever you call them.