Wednesday, October 31, 2012

Links for 10-31-2012

How application virtualization was reborn

Server virtualization has become a familiar fixture of the IT landscape and an important foundation for cloud computing.

But virtualization is also relevant to client devices, such as PCs. To a greater degree than server virtualization, client virtualization takes many forms, reflecting abstraction and management that can happen at many different points. Client virtualization includes well-established ways of separating the interaction with an application from the application itself, leveraging server virtualization to deliver complete desktops over the network (Virtual Desktop Infrastructure--VDI), and the use of hypervisors on the clients themselves. In short, client virtualization covers a lot of ground, but it’s all about delivering applications to users and managing those applications on client devices.


It’s essentially a tool to deal with installing, updating, and securing software on distributed “stateful” clients—which is to say, devices that store a unique pattern of bits locally. If a stateless device like a terminal breaks, you can just unplug it and swap in a new one. Not so with a PC. At a minimum, you need to restore the local pattern of bits from a backup.

However, client virtualization (in any of its forms) has never truly gone mainstream, whether because it often cost more than advertised or just didn’t work all that well. It’s mostly played in relative niches where some particular benefit—such as centralized security—is an overriding concern. These can be important markets. We see increased interest in VDI at government agencies, for instance. But we’re not talking about the typical corporate desktop or consumer.

Furthermore, today we access more and more applications through browsers rather than through applications installed on PCs. This effectively makes PCs more like stateless thin clients. And, therefore, it makes client virtualization something of a solution to yesterday’s problems rather than today’s.

Except for one thing.

Client virtualization, in its application virtualization guise, has in fact become prevalent. Just go to an Android or iOS app store.

Application virtualization has been around for a long time. Arguably, its roots go back to WinFrame, a multi-user version of Microsoft Windows NT that Citrix introduced in 1995. It was, in large part, a response to the rise of the PC, which replaced “dumb terminals” acting as displays and keyboards for applications running in a data center with more intelligent and independent devices. Historically, application virtualization (before it was called that) focused on what can be thought of as presentation-layer virtualization—separating the display of an application from where it ran. It was mostly used to provide standardized and centralized access to corporate applications.

As laptops became more common, application virtualization changed as well. It became a way to stream applications down to the client and enable them to run even when the client was no longer connected to the network. Application virtualization thus became something of a packaging and distribution technology. One such company working on this evolution of application virtualization was Softricity, subsequently purchased by Microsoft in 2006.

I was reminded of Softricity earlier this year when I spoke with David Greschler, one of its co-founders, at a cloud computing event. He’d moved on from Microsoft to PaperShare, but we got to talking about how the market for application virtualization, as initially conceived, had (mostly not) developed. And that’s when he observed the functional relationship between an app store and application virtualization, and how application virtualization had, in a sense, gone mainstream as part of mobile device ecosystems.

If you think about it, the app store model is not the necessary and inevitable way to deliver applications to smartphones, tablets, and other client devices.

In fact, it runs rather counter to the prevailing pattern on PCs—regardless of operating system—towards installing fewer unique applications and running more Web applications through the browser. Google even debuted Chrome OS, designed to work exclusively with Web applications, to great fanfare. As network connectivity becomes more pervasive and performs better, and as standards such as HTML5 evolve to better handle unconnected situations, it’s reasonable to expect this trend to continue.
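To make “better handle unconnected situations” a bit more concrete, here’s a minimal sketch, assuming a hypothetical note-taking Web app, of the sort of offline tolerance browsers expose to Web applications: local storage for data plus events that signal when connectivity returns. The "notes" storage key and the /api/notes endpoint are made up for illustration; this is TypeScript against standard browser APIs, not any particular product’s code.

    // Minimal sketch: keep a Web app usable while disconnected by caching its
    // working data locally and syncing back when the network returns.
    // The "notes" key and the /api/notes endpoint are illustrative placeholders.
    const STORAGE_KEY = "notes";

    function saveLocally(text: string): void {
      // Persist the latest draft in the browser so it survives a lost connection.
      localStorage.setItem(STORAGE_KEY, text);
    }

    function loadLocal(): string {
      return localStorage.getItem(STORAGE_KEY) ?? "";
    }

    async function syncToServer(): Promise<void> {
      // Only attempt the round trip when the browser reports connectivity.
      if (!navigator.onLine) return;
      await fetch("/api/notes", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ text: loadLocal() }),
      });
    }

    // Rather than failing outright while offline, sync whenever connectivity returns.
    window.addEventListener("online", () => { void syncToServer(); });

The point isn’t the specific APIs so much as that the browser increasingly gives Web applications the kind of local, disconnected behavior that used to require a natively installed application.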

But the reality of Chrome OS has been that, after the early geek excitement, it has so far pretty much hit the ground with a resounding thud. At least as of 2012, it’s one thing to say that we install fewer apps on our PCs. It’s another thing to use a PC that can’t install any apps. Full stop.

What’s more, it’s worth thinking about why we might prefer to run applications through a browser rather than natively.
It’s not so much that it lets developers write one application and run it on pretty much anything that comes with a browser. As users, we don’t care about making life easier for developers except insofar as it means we have more applications to use and play with. And, especially given that client devices have coalesced around a modest number of ecosystems, developers have mostly accepted that they just have to deal with that (relatively limited) diversity.

Nor is it really that we’d like to be able to use smaller, lighter, and thinner clients. Oh, we do want those things—at least up to a point. But they’re usually not the limiting factor in being able to run applications locally and natively. We don't want to make clients too limited anyway; compute cycles and storage tend to be cheaper on the client than on the server.

No, the main thing that we have against native applications on a client is their “care and feeding.” The need to install updates from all sorts of different sources and to deal with the problems when upgrades don’t go as planned. The observation that a PC’s software sometimes needs to be refreshed from the ground up to deal with accumulating “bit rot” as added applications and services slow things down over time.

And that’s where centralized stores for packaged applications come in. Such stores don’t eliminate software bugs, of course. Nor do they eliminate applications that get broken through a new upgrade—one need only peruse the reviews in the Apple App Store to find numerous examples. However, relative to PCs, keeping smartphones and tablets up-to-date and backed up is a much easier, more intuitive, and less error-prone process.

Of course, for a vendor like Apple that wants to control the end-to-end user experience, an app store has the added advantage of keeping the customer relationship firmly in the vendor’s hands. But the dichotomy between an open Web and a centralized app store isn’t just an Apple story. App stores have widely become the default model for delivering software to new types of client devices and certainly the primary path for selling that software.

The Web apps versus native apps (and, by implication, app stores) debate will be an ongoing one. And it doesn’t lend itself to answers that are simple either in terms of technology or in terms of device and developer ecosystems.
Witness the September 2012 dustup over comments made by Facebook CEO Mark Zuckerberg that appeared to diss his company’s HTML5 Web app, calling it "one of the biggest mistakes if not the biggest strategic mistake that we made."

However, as CNET’s Stephen Shankland wrote at the time: “Those are powerfully damning words, and many developers will likely take them to heart given Facebook's cred in the programming world. But there are subtleties here -- not an easy thing for those who see the world in black and white to grasp, to be sure, but real nonetheless. Zuckerberg himself offered a huge pro-HTML5 caveat in the middle of his statement.”

It’s often observed that new concepts in technology are rarely truly new. Instead, they’re updates or reimaginings of past ideas both successful and not. This observation can certainly be overstated, but there's a lot of truth to it. And here we see it again--with application virtualization and the app store.

Tuesday, October 30, 2012

Bass Harbor Light

Bass Harbor Light, a photo by ghaff on Flickr.

Another nice summer in Acadia National Park (on a couple different trips).

Links for 10-30-2012

Saturday, October 27, 2012

Head of the Charles 2012

Head of the Charles 2012, a photo by ghaff on Flickr.

The Head of the Charles last weekend was my first really heavy-duty use of the Sigma 150-500mm lens that replaced my old Sigma tele after its AF broke. When I sent that lens in for repair, I was offered a trade-in at a pretty good rate that it seemed silly not to take advantage of, even though it's a category of lens I don't use a huge amount.

I find the focus on the new lens is a lot more responsive than on the old one (and it has a top end of 500mm rather than 400mm). However, with the lens no longer the limiting factor, the AF limitations of my EOS 5D are a lot more apparent. So it's got me leaning towards an upgrade to a 5D Mark III rather than the not-yet-available 6D I had previously been considering.

Thursday, October 25, 2012

Links for 10-25-2012

Wednesday, October 17, 2012

Links for 10-17-2012

Monday, October 15, 2012

Links for 10-15-2012

Thursday, October 11, 2012

The inevitability of cloud computing

Thanks to a pointer from Joe McKendrick over at Forbes, this morning I had a chance to read a study looking at 2012 cloud adoption patterns (mostly at larger organizations) put together by Navint Partners. The bottom line? "While there’s still much debate over the Cloud’s security, the industry consensus is one of inevitability."

The study looked at both private and public cloud deployments, although it's a bit hard to tease apart when the conclusions relate to on-premise versus hosted offerings--or a hybrid combination of the two. I've come to somewhat wistfully think back to a 2009 CNET Blog Network piece I wrote about cloud terminology and sorta wish that we, as an industry, had come up with a better way to unpack the different concepts and approaches that come together under the "cloud computing" umbrella. But I digress.

Among the study's findings was that 80 percent of respondents recognized cloud technology as giving their organizations a competitive advantage.

The report goes on to note that:

Cloud’s scalable nature and modern approach to data and infrastructure pushes organizations into a more competitive position. While most CIOs recognize the Cloud has existed in some form for a decade, SaaS solutions are, in many industries, still novel. [Navint's Robert] Summers explained that while larger corporations have been using private clouds for a while, small‐to‐mid sized businesses can dramatically scale their operations and outpace competitors if some processes are relegated to a SaaS or Cloud model.

This is consistent with what we've been seeing at Red Hat with early cloud deployments. The ultimate goal from a CxO's perspective is to use cloud computing to make technology a competitive differentiator rather than a keep-the-lights-on cost. This goal only becomes more important as technology is increasingly core to how more and more businesses operate.

What form cloud takes will depend on the company. For smaller organizations, SaaS will likely play an outsized role.

But, as Gartner's Eric Knipp noted in a recent blog post: "While I don’t debate that 'the business' will have more 'packages' to choose from (loosely referring to packages as both traditional deployed solutions and cloud-sourced SaaS), I also believe that enterprises will be developing more applications themselves than ever before." He goes on to describe why he believes that a golden age of enterprise application development is upon us, partly because of the rise of Platform-as-a-Service. I'll discuss Knipp's thesis in more detail in a future post.

On the downside, the study also found that:

survey respondents still ranked security as the top concern (above compliance and integrity), and affirmed data security and privacy as the number one barrier to both public and private cloud adoption. Despite highly advanced security and fraud countermeasures employed by Cloud vendors, CIOs and other executives regard security guarantees and redundancy policies with guarded pessimism. Practically, this fear has had the effect that many companies have yet to move “mission‐critical” applications to the cloud.

I guess I'm not really surprised by this finding either. One wonders to what degree this is about perceptions, rather than reality. But, at some level, the distinction isn't that important if it's what potential customers believe.

The good news from my perspective is that I see a lot of good work happening out in the industry to bring structure to security (and compliance/governance/regulatory/etc.) discussions and to pull together the tools for discussions that transcend naive safe/not-safe dichotomies. I've got an upcoming piece that looks into the good stuff the Cloud Security Alliance (CSA) is doing in this space.

Finally, it's clear that cloud computing isn't going to be about private or public.

36% of survey participants believe that budget dollars for public cloud computing will increase by as
much as twenty percent by 2014, and 46% expect budgets for private cloud computing to jump by more than twenty percent over the same period.  

Which is why we're focused on open and hybrid at Red Hat.

Links for 10-11-2012