- Are We Witnessing the Decline of Ubuntu? - Datamation
- Edward Tufte forum: PowerPoint Does Rocket Science--and Better Techniques for Technical Reports
- What Clayton Christensen Got Wrong in his Theory of Low-End Disruption
- EdX, MIT, and online certificates: How non-degree certificates are disrupting traditional university degree programs. - Slate Magazine - I suspect we're seeing the future of MOOCs here.
- PaaS Chat: Talking MBaaS With Kinvey - YouTube - RT @Kinvey: Check it out! RT @krishnan Recording of Google Hangout I did w/ @sravish of @Kinvey & @ghaff of @redhat talking MBaaS
- IDC Vendor Profile - Red Hat as a Storage Solutions Provider - RT @RedHatStorage: Find out why @IDC thinks Red Hat #Storage is an "unstoppable force" in the industry. Check out our vendor profile:
- Red Hat and Docker Collaborate | Docker Blog - RT @golubbe: Excited to announce @docker collaboration w/ @RedHat on Fedora, Openshift, & more #paas #FOSS
- Twitter / ghaff: Lafayette Cemetary. #1 ... - Lafayette Cemetary. #1
- Twitter / pythondj: not a pretty picture - github ... - RT @pythondj: not a pretty picture - github is down!
Wednesday, September 25, 2013
Links for 09-25-2013
Thursday, September 19, 2013
What are containers and how did they come about?
Today's announced collaboration between Red Hat and dotCloud, the company behind Docker, is exciting for a lot of reasons. As the release notes: "Docker and OpenShift currently leverage the same building blocks to implement containers, such as Linux kernel namespaces and resource management with Control Groups (cGroups). Red Hat Enterprise Linux Gears in OpenShift use Security-Enhanced Linux (SELinux) access control policies to provide secure multi-tenancy and reduce the risk of malicious applications or kernel exploits."
Among other areas of collaboration, we'll be working with dotCloud "to integrate Docker with OpenShift’s cartridge model for application orchestration. This integration will combine the power of Docker containers with OpenShift's ability to describe and manage multi-container applications, enabling customers to build more sophisticated applications with enhanced portability."
(See also the blog post from Docker here.)
But what are containers exactly? I'm down at LinuxCon/CloudOpen in New Orleans this week and I've seen a lot of interest in the various sessions that touch on containers. I've also seen a fair bit of confusion and vagueness over what they are and what function they serve. In a way, I find this a bit surprising because the concept--and products based on that concept--has been around for almost a decade, and progenitors go back even further. But I think it reflects just how thoroughly hypervisor-based virtualization has come to dominate discussions about partitioning physical systems into smaller chunks. Today is a far cry from the mid-2000s when I was writing research notes as an industry analyst about the "partitioning bazaar," which saw all manner of hardware-based, software-based, and hybrid techniques--many implemented on large Unix servers.
See: The Partitioning Bazaar (2002), New Containments for New Times (2005), and The Server Virtualization Bazaar, Circa 2007. Some of this blog post is adapted from material in those earlier notes. The original notes get into more of the historical background and context for those who are interested.
Hypervisor-based Virtualization
First let's consider hypervisor-based virtualization, aka hardware virtualization. Feel free to skip or skim this part. But I think the context will be useful for some.
Virtual Machines (VMs) are software abstractions, a way to fool operating systems and their applications into thinking that they have access to a real (i.e. physical) server when, in fact, they have access to only a virtualized portion of one. Each VM then has its own independent OS and applications, and is not even aware of any other VMs that may be running on the same box, other than through the usual network interactions that systems have (and certain other communication mechanisms that have evolved over time). Thus, the operating systems and applications in VMs are isolated from each other in (almost) the same manner as if they were running on separate physical servers. VMs are created by a virtual machine monitor (VMM), often called a "hypervisor," which sits on top of the hardware, creates and manages one or more VMs, and mediates their access to the underlying physical resources. (The hypervisor thereby provides many of the services provided by an operating system and, in the case of KVM for example, that operating system can even be a general-purpose OS like Linux. For the purposes of this discussion, we're only considering native virtualization as opposed to host-based virtualization such as that provided by VirtualBox, which isn't really relevant in the server space.)
The result is multiple independent operating system instances running on a single physical server, each of which communicates with the rest of the world, including other instances on the same physical server, through the hypervisor. Historically, the reason server virtualization was so interesting was that it enabled server consolidation--the amalgamation of several underutilized physical servers into one virtualized one while keeping workloads isolated from each other. This last point was important because Windows workloads in particular often couldn't simply be crammed together into a single operating system instance because of DLL hell, version dependencies, and other issues. The VM approach was also just a good fit for enterprises, which often maintained lots of different operating systems and versions thereof. VMs let them largely keep on doing things the way they were used to--just with virtual servers instead of physical ones.
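For a sense of what "multiple independent OS instances managed by a hypervisor" looks like from the management side, here's a minimal sketch using the libvirt Python bindings. It simply enumerates the guests a hypervisor knows about; it assumes a Linux host running KVM with libvirt and libvirt-python installed, and "qemu:///system" is the conventional local connection URI.

```python
# Minimal sketch: ask a hypervisor (via libvirt) which guest instances
# it manages. Assumes a KVM/libvirt host with libvirt-python installed.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains():
        # info() returns [state, maxMem(KiB), memory(KiB), vCPUs, cpuTime]
        _state, max_mem_kib, _, vcpus, _ = dom.info()
        running = "running" if dom.isActive() else "shut off"
        print(f"{dom.name():20s} {running:9s} {vcpus} vCPU(s) {max_mem_kib // 1024} MiB")
finally:
    conn.close()
```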
Over time, server virtualization also introduced features such as live migration, which allows moving running instances from machine to machine, as well as a variety of other features going beyond consolidation. These features too generally reflected enterprise needs and stateful "system of record" workloads.
Where did containers come from?
Containers build on the basic *nix process model, which provides the foundation for separation. Although a process is not truly an independent environment, it does provide basic isolation and consistent interfaces. For example, each process has its own identity and security attributes, address space, copies of registers, and independent references to common system resources. These various features standardize communications between processes and help reduce the degree to which wayward processes and applications can affect the system as a whole.
*nix also builds in some basic resource management at the process level--including priority-based scheduling, augmented by mechanisms such as ulimit (a shell front end to the setrlimit() system call), which can be used to cap resources such as CPU time, file descriptors, and locked memory used by a process and its descendants. More recently, Control Groups (cgroups) have significantly extended the resource management built into Linux--providing controls over CPU, memory, disk, and I/O usage of the kind often offered through add-on management products in the past.
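To make those process-level controls concrete, here's a minimal sketch in Python. The rlimit calls mirror what ulimit does from the shell; the cgroup portion assumes a cgroup v1 memory controller mounted at /sys/fs/cgroup/memory (which varies by distribution) and root privileges, and the group name "demo" is purely illustrative.

```python
# Minimal sketch: ulimit-style limits on a child process, then placing
# it in a memory-capped cgroup. Assumes Linux, root, and a cgroup v1
# memory controller mounted at /sys/fs/cgroup/memory.
import os
import resource

def run_limited(cmd_args):
    pid = os.fork()
    if pid == 0:
        # Child: cap open file descriptors and CPU seconds before exec,
        # the same limits "ulimit -n" and "ulimit -t" would apply.
        resource.setrlimit(resource.RLIMIT_NOFILE, (256, 256))
        resource.setrlimit(resource.RLIMIT_CPU, (60, 60))
        os.execvp(cmd_args[0], cmd_args)
    return pid

def add_to_memory_cgroup(pid, group="demo", limit_bytes=256 * 1024 * 1024):
    # Illustrative cgroup v1 layout: create a group, set a memory cap,
    # then move the process into it by writing its PID to "tasks".
    base = os.path.join("/sys/fs/cgroup/memory", group)
    os.makedirs(base, exist_ok=True)
    with open(os.path.join(base, "memory.limit_in_bytes"), "w") as f:
        f.write(str(limit_bytes))
    with open(os.path.join(base, "tasks"), "w") as f:
        f.write(str(pid))

if __name__ == "__main__":
    pid = run_limited(["sleep", "30"])
    add_to_memory_cgroup(pid)
    os.waitpid(pid, 0)
```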
The first example of isolating resource groups from each other probably dates to 1999, when the FreeBSD jail(2) system call reused the chroot implementation but blocked off the normal routes to escape chroot confinement. However, two different implementations garnered a fair bit of attention in the 2000s: one from SWsoft's Virtuozzo (the company is now called Parallels) and another in Sun's Solaris. The Solaris 10 implementation is probably what most popularized the "containers" term, which was their marketing name for OS virtualization. (Their technical docs used "zones" for the same thing.) IBM also introduced containers in AIX, which were unique in that they allowed moving running containers between systems.
Despite some interest in containers during this period, though, they never had a broad-based impact. SWsoft came to the market with a history of success in the hosting space, where its flavor of containers (Virtual Private Servers) was widely used. That's because most hosting providers are highly cost sensitive and, as we'll see, containers enable very high density relative to hypervisor-style virtualization (among other differences). However, the push behind containers never moved them into new markets to any appreciable degree. In part, this reflected an imperfect technical fit with enterprise requirements. It was probably as much because of an industry tendency to standardize on particular approaches--and the standard for server consolidation during the 2000s ended up being VMware.
What are containers?
Before getting into what's happening today, let's talk about what containers are from a technical perspective. I've tried to make this description relatively generic as opposed to getting into specific implementations such as OpenShift's.
Like partitions, a container presents the appearance of being a separate and independent OS image—a full system, really. But, like the workload groups that containers extend, there’s only one actual copy of an operating system running on a physical server. Are containers lightweight partitions or reinforced workload groups? That’s really a matter of definition and interpretation, because they have characteristics of each. It may help to think of them as “enhanced resource partitions” that effectively bridge the two categories.
Containers virtualize an OS; the applications running in each container believe that they have full, unshared access to their very own copy of that OS. This is analogous to what VMs do when they virtualize at a lower level, the hardware. In the case of containers, it’s the OS that does the virtualization and maintains the illusion.
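As an illustration of how the OS maintains that illusion, here's a minimal sketch (Python on Linux, requiring root) that gives a child process its own UTS namespace, so the hostname it sees and changes is private to it. The hostname "container-demo" is just illustrative; real container implementations combine this with PID, mount, network, and other namespaces plus cgroups.

```python
# Minimal sketch of the "own copy of the OS" illusion using one Linux
# namespace (UTS, which covers the hostname). Requires root.
import ctypes
import os
import socket

CLONE_NEWUTS = 0x04000000  # new UTS (hostname) namespace, from <linux/sched.h>

libc = ctypes.CDLL("libc.so.6", use_errno=True)

pid = os.fork()
if pid == 0:
    # Child: detach into a private UTS namespace and change "its" hostname.
    if libc.unshare(CLONE_NEWUTS) != 0:
        raise OSError(ctypes.get_errno(), "unshare failed (are you root?)")
    name = b"container-demo"
    libc.sethostname(name, len(name))
    print("inside: ", socket.gethostname(), flush=True)  # container-demo
    os._exit(0)

os.waitpid(pid, 0)
print("outside:", socket.gethostname())  # the real hostname, unchanged
```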
Containers can be very low-overhead. Because they run atop a single copy of the operating system, they consume very few system resources such as memory and CPU cycles. In particular, they require far fewer resources than workload management approaches that require a full OS copy for each isolated instance.
Containers tend to have lower management overhead, given that there’s but a single OS to be patched and kept current with security and bug fixes. Once a set of patches is applied and the system restarted, all containers automatically and immediately benefit. With other forms of partitioning such as hypervisor-based virtualization, each OS instance needs to be patched and updated separately, just as they would if they were on independent, physical servers. This is a critical benefit in hosting environments but it has often been seen as a negative in much more heterogeneous enterprise environments.
What containers don't do is provide much if any additional fault isolation for problems arising outside the process or group of processes being contained. If the operating system or underlying hardware goes, so go its containers—that is, every container running on the system. However, it's worth noting that, over the past decade, an enormous amount of work has gone into hardening the Linux kernel and its various subsystems. Furthermore, SELinux can be used to provide additional security isolation between processes.
So, here are a few statements about containers that are (generally) true:
- The containers on a single physical server (or virtual machine) run on a single OS kernel. The degree to which the contents of a given container can be customized is somewhat implementation dependent.
- As a result, many patches will apply across the containers associated with an OS instance.
- Management of resource allocation between containers is fast and low overhead because it's done by a single kernel managing its own process threads.
- Similarly, the creation (and destruction) of containers is faster and has lower overhead than booting a virtual machine.
- Today, running containers cannot generally be moved from one running system to another.
Why today's interest?
In a nutshell: Because cloud computing--Platform-as-a-Service (PaaS) in particular--more closely resembles hosting providers than traditional enterprise IT.
A lot has to do with the nature of cloud workloads. I explored the differences between traditional "systems of record" and cloud-style "systems of engagement" in a new whitepaper. However, in brief, cloud-style workloads tend towards scale-out, stateless, and loosely coupled. They also tend to run on more homogeneous environments (alongside existing applications under hybrid cloud management) and use languages (Java, Python, Ruby, etc.) that are largely abstracted from the underlying operating system.
Some of the implications of these characteristics are that you don't generally need to protect the state of individual instances (using clustering or live migration, for example). Nor do you typically have, or want to have, a highly disparate set of underlying OS images, given that doing so makes management harder. You also tend to have a large number of smaller and shorter-lived application instances. These are all a good match for containers.
PaaS amplifies all this in that it explicitly abstracts away the underlying infrastructure and enables the rapid creation and deployment of applications with auto-scaling. This is a great match for containers, both because of the high densities and the rapid resource reallocation they enable (and, indeed, require).
In short, it's no coincidence that containers have re-entered the conversation so strongly. They're a match for cloud broadly and PaaS in particular. This isn't to say that they're going to replace virtual machines. I'd argue rather that they're a great complement that needn't (and probably shouldn't) try to replicate the VM capabilities that were designed with different use cases in mind.
Tuesday, September 17, 2013
Links for 09-17-2013
- How Facebook stands to gain by sharing its trade secrets | Internet & Media - CNET News
- The 20 Smartest Things Jeff Bezos Has Ever Said (AMZN)
- A Brief History of Open-Source Code [Infographic] | Kinvey Backend as a Service Blog - RT @sravish: A Brief History of Open-Source [Infographic]. @krishnan, @timbray, @Scobleizer, @benkepes @scottgu will enjoy this ->
- Why Big Data Is More About Variety and Less About Volume | Big Data Journal - RT @mccrory: Why Big Data Is More About Variety and Less About Volume << Interesting
- Twitter / ghaff: Pat O'Briens with Linux ... - Pat O'Briens with Linux Foundation et al
- Registration Launch - RT @RedHatCloud: Join @matthicksj & Bill DeCoste for “How to design code for the cloud and enjoy the benefits of PaaS”. Register now.
- Kinvey Joins Red Hat’s OpenShift Platform as a Service Ecosystem | Kinvey Backend as a Service Blog - RT @sravish: I am really excited that @Kinvey Joins Red Hat’s @OpenShift Platform as a Service Ecosystem cc / @krishnan ->
- Who Made That Built-In Eraser? - NYTimes.com
- Ranked: Aaron Sorkin, from Worst to Best | Nerve.com - I'd move a few things around but a pretty good list overall.
- Red Hat releases new cloud-friendly Red Hat Storage | ZDNet - RT @RedHatStorage: "Red Hat releases new cloud-friendly Red Hat #Storage" - Great coverage by @sjvn in ZDNet:
- The PR 50 2013 - Business Insider - RT @heathercfitz: Congrats @amber_rowland Well deserved ack for one of the best PR people I know. << +100
- SAN Stalwarts and Wistful Thinking
- Treadmill Desk Review: LifeSpan 1200DT / DT-5 – GearMonk
- The Man Who Would Build a Computer the Size of the Entire Internet | Wired Enterprise | Wired.com - Long article on Docker.
Tuesday, September 10, 2013
Links for 09-10-2013
- 100 Best Apps For iPhone And Android - Business Insider - RT @alexfyost: A good read. Useful stuff. The App 100: The World's Greatest Apps via @sai
- 8 ways Boston can nurture — and keep — more startups — Tech News and Analysis - RT @sravish: 8 ways #Boston can nurture - AND KEEP - more #startups, by @gigabarb ->
- Techmeme Heds (nottechmeme) on Twitter - RT @Pogue: Do you read tech blogs? Then you’ll crack up at this parody Twitter account. Whoever you are, I think I love you.
- Lead, Follow. or...: VMware ruined my Summer - RT @DouglasOF: Damn VMworld: The new cadence of Enterprise IT marketing
- Why WiFi Could Be Both a Blessing and a Curse For The Internet of Things - RT @TonyUphoff: My pal David Berlid aka @dberlind is killing it at Programmable Web. Perfect fit:
- 36 Hours in the Brandywine Valley - NYTimes.com
- Apple TV: Building a ‘truly’ Smart TV system takes time | Know Your Mobile - "This, ladies and gentleman, is exactly why Apple hasn’t come out with a smart television yet. No one knows what one is or should do. To be sure, there are plenty of “smart TVs” already on the market. But that term should be used loosely."
- Romotive | The Fun Personal Smartphone Robot that You Can Program
- Out With the Old, In With the New. VMware’s New Reality and the Carnage to Come | The Diversity Blog - SaaS, Cloud & Business Strategy - RT @Staten7: Out With the Old, In With the New. VMware's New Reality and the Carnage to Come via @zite
- Podcast: Here’s what’s wrong with the Internet of Things — Tech News and Analysis - RT @gigastacey: Podcast: Here's what's wrong with the Internet of Things @om and I chat.
- What chip designers will do when Moore’s Law ends | VentureBeat - RT @deantak: What chip designers will do when Moore’s Law ends | VentureBeat via @VentureBeat
Hanging out about open clouds
James Maguire of Datamation moderated a Google+ Hangout panel yesterday with me, Devin Carlen (Nebula), Marten Mickos (Eucalyptus Systems), and Peder Ulander (Citrix) on the topic of open source clouds. It was a good discussion, mixed in with just enough competitive jousting to keep things interesting.
Monday, September 09, 2013
Wrapping up VMworld 2013 in SF
Belated, I know. But last week was spent balancing some high-priority work with R&R, and I wanted to give my first reactions a little time to settle anyway. At least that's my excuse and I'm sticking with it. So, without further ado, here are some things that either struck me or caught my eye at my seventh VMworld.
Disclaimer: I am Cloud Evangelist for Red Hat. Read what I write in that context. However, also please note that these opinions are mine alone and do not necessarily represent the position of my employer. I also note that I usually avoid getting into a lot of competitor specifics when I write. However, VMworld is a major event and I thought some may find my observations useful.
It was sober.
By which I mean it was more about product/initiative rationalization, integration, and extension than it was about bold claims and major new directions. Even that which was arguably most new, VMware NSX (network virtualization), was a largely anticipated consequence of VMware's Nicira acquisition. (Furthermore, based on a number of conversations I had at the show, it isn't clear to me that the typical attendee understands at this point what software-defined-networking means for them either technologically or organizationally. To be clear, I firmly believe that software-defined-everything is an important trend; I just think it will take time to develop.)
Sober isn't necessarily bad, by the way. In fact, the industry analyst who described this year's event to me that way meant it as something of a compliment. The opposite of hype-to-excess. But it made for a general sense of VMware securing its core virtualization business and framing every big trend in the context of that existing business, even though cloud computing, for example, is hardly just virtualization with a bit of management tacked on. Note, for example, how deliberately the VMware NSX moniker evokes VMware ESX.
Exploiting existing beachheads can be a good business and technology strategy. It's also a strategy that can fail miserably. (See, for example, Microsoft's determination to bridge from its Windows dominance to mobile.)
Speaking of which. "Defy Convention." Really?
I get that someone in VMware corporate marketing or some executive thought it would be a good idea to theme the show around the polar opposite of stodgy old VMware. The problem is that this brainstorm apparently began and ended with a slogan and some grunge graphics. All the actual content of the conference was about incrementally extending virtualization. A comforting message for the thousands of virtualization admins in attendance? Sure. A message about upending the status quo? Not so much.
(OK. VMware NSX--or at least what it represents--is threatening to network admins and to Cisco. But anything that whiffed too strongly of software-defined-storage was kept pretty low profile, given that VMware parent EMC is still deeply in the business of selling expensive specialized storage hardware.)
The non-hybrid hybrid.
Just one example of not-defying-convention is VMware's approach to hybrid cloud. You can have any kind of hybrid cloud you want, private or public, so long as it's based on VMware technology. That has a certain logic from VMware's perspective. They've had no real success competing with non-VMware-based public cloud providers and are unlikely to change that using their proprietary software. What they can arguably do is provide a simple bridge for their installed base to a tightly circumscribed set of public cloud resources. (That these public cloud resources will be a combination of VMware data centers and those of partners would seem to be a channel-conflict issue, but that's another topic.)
A useful incremental feature for a subset of their installed base perhaps. But hardly hybrid in any true meaning of the word. Chalk up another one for convention.
Pivotal MIA.
Finally, I note that Pivotal, the Paul Maritz-led EMC initiative that includes Cloud Foundry and other VMware pieces involved with developing cloud applications, was largely absent from the show. It had a booth--plastered with various vague slogans around "making business faster" and such--and put out a short press release with VMware about expanding their "strategic partnership," but it wasn't otherwise very visible.
I get the deliberate arms-length relationship that helps VMware focus on its core business while Pivotal works on cool new stuff. But one consequence is that VMware looks to be refocused on infrastructure and its historical comfort zone (and the comfort zone of many in attendance at VMworld) rather than the broader possibilities opened up by a complete cloud portfolio.
And that's about convention, not defying it.
Video: SolidFire's CEO David Wright talks flash storage and OpenStack
We also discussed SolidFire's integration with OpenStack. They joined the Red Hat OpenStack Partner Network back in June.
Flash-based storage is interesting in the context of cloud computing partly because its overall performance is typically much higher than that of rotating media (which, while continuing to increase in capacity, has largely plateaued in performance). However, as the SolidFire graphic below shows, flash can also help deliver predictable performance in concert with appropriate software hooks and management.
(At Red Hat, we've also been working on both quality-of-service and security for multi-tenant environments. We deliver Linux features such as cgroups and SELinux through Red Hat Enterprise Linux, which allows Red Hat Enterprise Virtualization and Red Hat Enterprise Linux OpenStack Platform to take advantage of them. Higher up the stack, Red Hat CloudForms hybrid cloud management provides policy-based quotas and other controls that can span a heterogeneous infrastructure. Those are just a few examples.)