Monday, October 27, 2014

Why is lobster "market price"?

The price of lobster, like the price of anything else, is set in a market. But the market price you pay is fundamentally a price determined by the restaurant market, not the market for lobsters. And the issue is a basic one of capacity and competition.

Think back to the Fisherman’s Friend and its excellent location. Stonington is a great place to visit. But it’s also a very small town. There aren’t very many places to eat. And if it’s a certain kind of coastal Maine seafood dinner experience you’re after, there aren’t any other places in town to go. There’s a little reason to fear losing customers to the boil-at-home option as lobster prices fall but no reason to worry about a nearly identical competitor next door poaching your customers. Nor is there a nearly identical competitor next door whose customers you might hope to poach with a discount.

Cooking and eating lobster at the house in Maine

I’ve noticed this frequently in Maine. The lobster price (especially for small, soft shell lobsters—i.e. the most advertised price) is a very competitive market-driven thing. The same boiled lobsters at lobster pounds are too because they’re pretty hard to decouple from the live and kicking versions. But lots of other forms of lobster, including lobster rolls and even refrigerated lobster meat, tend not to drop accordingly.

It’s also worth noting, per another conversation I had recently, that it’s not immediately obvious why so many restaurants list their lobster as “market price” given that the prices of many of their fish and other expensive ingredients presumably vary by season as well. My cynical nature wonders if this isn’t primarily a ploy to just not publish the price and use that lack of transparency to wrest a few extra dollars for a perceived luxury item. 

Friday, October 24, 2014

Review: Nixeus in-ear earphones/mic


Given the amount of traveling that I’ve been doing over the past couple of years, I decided to kick off a series on this blog taking a look at some of the (often morphing but fairly compact) pile of gear with which I travel. This is the inaugural post on this theme.

I favor big over-the-ear headphones when I’m editing podcasts at home. For travel? Not so much. Small and lightweight is the name of the game whether I’m plugging into a conference call or just listening to some music.

The Nixeus in-ear earphones are a nice example of compact earphones that can be used for either phone calls or listening. Their MSRP is $39.95 but they’re available for about half that on Amazon as of this writing.

They come with three sets (S, M, and L) of roughly cylindrical foam earbuds that you can use to tailor their fit. Like other earphones of this general type, the idea is to fit them relatively snugly into your ear—both to better block ambient sound and to keep them from falling out. From a fit perspective, I think of this type of design as something intermediate between iPod-style earbuds which just sit loosely in the ear and the silicone-style ear tips which you press fairly tightly into your ear canal. 

One of the challenges with reviewing this type of product is that fit and comfort are ultimately very much a matter of preference and the geometry of your particular ear. For extended music listening, I still prefer the silicone ear tip design such as Klipsch uses for its (significantly more expensive) X4i. On the other hand, I know a fair number of folks who just don’t like what they describe as “jamming” said silicone ear tips into their ear. 

What I can say is that the Nixeus earphones have a much more solid fit than standard earbuds do and, in part for this reason, their audio quality is commensurately better as well. The sound quality (both for the earphones and the mic) is as good as or better than that of other examples of the same general design which I’ve tried.

Really, for the price, if you’re still using basic earbuds, give these or something else like them a spin. You’ll be glad you did. With Christmas coming up, it’s also probably worth mentioning that the Nixeus packaging is sleek and modern (with a magnetic closure for the box) so it looks like something costing more than it actually does.  

[Disclaimer: These earphones were provided to me for review purposes. No other compensation was provided and the opinions in this review are mine alone.]

Wednesday, October 22, 2014

Podcast: Private and hybrid storage sharing using ownCloud

ownCloud CTO and co-founder Frank Karlitschek sat down at CloudOpen in Dusseldorf to talk about how ownCloud lets companies offer their employees a "Dropbox-like" experience while retaining control of where data is stored and how it is accessed. It's a hybrid approach to cloud storage that can be important in a world where jurisdictional concerns can be a major CIO headache.

How ownCloud works in concert with Red Hat Storage Server
ownCloud whitepapers

Listen to MP3 (0:15:20)
Listen to OGG (0:15:20)

Wednesday, October 15, 2014

Links for 10-15-2014

Sunday, October 12, 2014

What do people mean by cloud security?

Security continues to top the charts when IT folks are asked what most gives them pause about using a cloud—especially a multi-tenant public one. This invites the retort: “Do they think they know how to better secure systems against attackers than Amazon does?” Probably not. But “security” in this case often means something quite different than just keeping the bad guys out.

This is a general observation and not a particularly original one. Back in 2011, I was writing about how cloud governance was about more than security. More recently, I’ve given many presentations delving into how cloud security is a much broader topic than just classic security.

But the extent to which cloud “security” goes beyond just classic security (most classic concerns still matter as well) was reinforced during a couple of sessions at 451 Research’s Hosting + Cloud Transformation Summit held in Las Vegas last week. And they provided some color about what people mean by that “security” word as well.

In his keynote, Research VP William Fellows reiterated that security—perceived and real—continues to come up regularly in cloud discussions. However, he went on to say that it’s actually jurisdiction which is the number one question. Perhaps not surprising really given the headlines of the last year, but it reinforces that when people voice concerns about security, they are often talking about matters quite different from the traditional infosec headaches. (Attorney Deborah Salons sat down to do a podcast with me early last year on data governance issues. The link includes a transcript for those who prefer reading.)


Michelle Bailey, VP of Datacenter Initiatives and Digital Infrastructure, fleshed out these security concerns in more detail during her session. The question she was answering was a bit different: “What are the top three things that providers can do about security?” Presumably certain types of security concerns (e.g. malware in a company’s POS systems) aren’t something a provider could be expected to do a lot about. Nonetheless, I expect there’s a high correlation between someone being concerned with some aspect of security and valuing providers who can mitigate that risk.

Data locality comes up here too. This is a hot topic among cloud providers and one of the reasons, besides sheer volume, for their rush to build new data centers. In other words, people want to be able to choose, say, an Amazon region that is sufficiently constrained geographically from the perspective of judicial orders or other authority. It’s about knowing the laws to which they may be subject.

But broadly, I’d characterize the top wants as being fundamentally about visibility and control. Transparency, auditability, verifiable encryption, control over encryption. And indeed pretty much the whole rest of the list is either related characteristics or various standards and documentation to help ensure that cloud providers do the things they promise to do.

Conspicuously lacking is pretty much anything in the vein of physical security or DDOS mitigation or firewall configurations. That’s because, while important, they’re largely viewed as solved problems from the perspective of the cloud provider.

Mind you, given the shared responsibility model that comes into play when you use a cloud provider, you share responsibility for the workloads that you’re running on the cloud provider. You’re still running and patching the operating system running in the cloud. But you know how to do that; you basically do the same thing you do on-premise. (Obligatory plug for Red Hat Enterprise Linux and our Certified Cloud Provider Program here. I should have a new whitepaper out soon.) 

For these and other reasons, Michelle concluded that “the end game isn’t public cloud, it’s hybrid cloud. And you can bet on that for the next 5 years.” And that security, among other factors, will lead to hosting providers remaining a “very long tail market” in which messaging, targeting, and matching strengths with customer requirements will continue to offer many opportunities for differentiation. 

Sunday, October 05, 2014

Topsfield Fair

I went over to the Topsfield Fair on a drizzly Saturday. I'd never been to this one and hadn't been to the Bolton Fair (which is actually held in my town of Lancaster) for a few years. It is so New England-y!

Thursday, October 02, 2014

Links for 10-02-2014

Wednesday, September 24, 2014

Podcast: OpenShift Enterprise v3 with Joe Fernandes

In this podcast, Red Hat's Joe Fernandes talks about all the technologies and products that come together in OpenShift Enterprise v3, the upcoming on-premise version of Red Hat's platform-as-a-service. These include Red Hat Enterprise Linux and Project Atomic, Linux Control Groups, Docker, and Kubernetes.

Joe runs product management for OpenShift and he brings a great perspective on how these various capabilities dovetail with each other to deliver what's ultimately the most important thing: the developer experience.

Listen to MP3 (0:20:04)
Listen to OGG (0:20:04)


Gordon Haff:  Hi everyone. This is Gordon Haff in Cloud Product Strategy at Red Hat here with another episode of the Cloudy Chat podcast.
Today I'm joined by Joe Fernandes who runs product management for OpenShift.
Gordon:  Today we're going to talk about OpenShift V3, the next upcoming version of OpenShift, which is Red Hat's enterprise PaaS.
Not so much to get into a lot of product details but because it's a convenient way to talk about all of the interesting new technologies that we're working on and which are coming into Red Hat today. OpenShift pulls these together.
Joe, at a top level why don't you lay out the agenda and then we can dive into some of the details?
Joe:  We've been out with OpenShift for quite some time. We actually launched it in developer preview over three and a half years ago now. Today we have both the commercial public PaaS service in OpenShift Online as well as a private PaaS product in OpenShift Enterprise.
As you mentioned OpenShift ties together a lot of different Red Hat and open source technologies and wraps a nice developer experience around it. OpenShift is built around Linux containers. It deals with orchestration of those containers at very large scale. It packages in a bunch of interesting developer services and, like I said, wraps that around a nice experience to accelerate application development and bring more agility to companies who are building applications.
Gordon:  Let's break that down into its components. First of all you got a platform there. Tell us a little bit about the underlying platform that OpenShift runs on.
Joe:  OpenShift is built on a platform of Red Hat Enterprise Linux. We make use of a lot of technology within RHEL to basically bring the OpenShift product and all of its capabilities to light. Today OpenShift is built on RHEL 6, but what we're working on in V3, as you allude to, is bringing this onto RHEL 7 and moving to our new container model which will be based around Docker.
OpenShift will leverage the Docker capabilities that were introduced in RHEL 7 when it was launched in June. It will also be able to make use of the RHEL Atomic host, which is a new product that we've announced around the upstream Project Atomic.
RHEL Atomic host is not commercially available yet but Project Atomic is an active community that we've basically launched back in the Summit time frame in April to work on an optimized Linux host that's optimized around containerized applications. That's important to OpenShift because everything we run runs in Linux containers.
What we can deliver here with Atomic is a very lightweight host that's optimized for those environments but it also brings a new model for how you manage the hosts and you're managing it in an Atomic way as well.
Gordon:  The interesting thing about Atomic is that people out there say, "With platform-as-a-service the operating system doesn't matter any longer." VMware was on that particular kick a couple of years ago and they still don't believe the operating system is important. I wonder why.
One of the things we see with Atomic and with Red Hat Enterprise Linux more broadly is that as we are talking about these containers we're talking about the application packaging through Docker. The operating system is very much at the core of all that.
Joe:  Absolutely. Those applications have to run somewhere. They're running in Docker containers but those containers are running on a host OS. It's that host OS that's providing a lot of the capabilities, like isolation through kernel namespaces. For security, we do a lot of work with SELinux to implement a layered security model. Cgroups provide resource confinement. All of that comes from the host operating system.
We have a long history of working on Linux kernel technologies going back more than a decade. A lot of that expertise is what we're bringing to not only Atomic but to communities like Docker and other communities in the Docker ecosystem and in the containers ecosystem.
Gordon:  Let's talk a little more about Docker and vis‑a‑vis application packaging specifically around Docker and how that relates to OpenShift and how it relates to our cartridge model.
Joe:  It's funny. A lot of people are starting to hear about Linux containers due to the popularity of Docker, but the underlying containers technology has been around for a while. Things like Linux control groups, as I mentioned, kernel namespaces. We've been using those for years in OpenShift and even further back our customers have been using those on RHEL 6. Companies like Google have been using containers technology at scale for a long time as well.
What Docker really brings is a new packaging model for applications that run inside containers and enables what we refer to as application‑centric IT. This packaging model, which basically starts with a Dockerfile that's built into a Docker image, brings things like portability of your applications across different environments.
The Docker image runs everywhere that a compatible Linux host would run. As people know, Linux runs everywhere. That means you'll be able to take that image and run it not only in public clouds from providers like Google and Amazon, who have already announced support for Docker, but in your private cloud as well, regardless of whether that private cloud is built on OpenStack or VMware virtualization technologies or even on bare metal servers.
It spans all the footprints from bare metal to virtualization to private cloud and even public cloud. That's powerful.
Some of the other things that we see are the benefits that it brings to development and to the operations team that runs the PaaS environment. When we launched OpenShift more than three years ago we decided to build it around containers because of the speed with which you could deploy applications within those containers.
When a developer comes to OpenShift they can spin up applications in seconds. That wouldn't be possible if we were spinning up a full guest VM for each application or a group of VMs for larger applications where you have to bring up an OS within those VM guests. The speed and agility that containers enable for developers is very exciting.
It also brings a lot of efficiencies for IT both in making more efficient use of the infrastructure that it runs on, getting more density of applications per host but also making it more efficient to manage because you're managing a single host kernel that's now running multiple applications as opposed to managing a host kernel and then guest OSs for each VM.
All these capabilities like I mentioned come not just from Docker but from the broader containers movement. It's exciting to see so many people getting involved in that movement and getting new capabilities being introduced in this area both from Red Hat and ISVs like Docker.
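To make the packaging model Joe describes concrete, here is a minimal Dockerfile sketch. The base image, package, and application file are illustrative assumptions, not anything specific that Red Hat or OpenShift shipped:

```dockerfile
# Start from a (hypothetical) RHEL 7 base image
FROM rhel7

# Install the runtime the application needs
RUN yum install -y python && yum clean all

# Copy the application code into the image
COPY app.py /opt/app/app.py

# Declare the port the application listens on
EXPOSE 8080

# Run the application when the container starts
CMD ["python", "/opt/app/app.py"]
```

Building this with `docker build` produces an image that should run unchanged on any compatible Linux host, which is the portability Joe refers to above.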
Gordon:  OpenShift offers features like automated scaling that can span a number of servers, and OpenShift itself can run on a large population of VMs or bare metal servers, your choice where to run it. Maybe talk a little about where we're going with handling that multi‑host server environment.
Joe:  As you mentioned, containers are great but applications don't run in a single container. Typical applications that we see in OpenShift will span multiple containers. They don't all just run on one host; they're going to span multiple hosts.
In our OpenShift online environment, for example, we have hundreds of VM instances hosting hundreds of thousands of applications. Each application may have a handful to a large number of containers that form that stack. This is all an orchestration and scheduling challenge and it's the role of the OpenShift broker.
When we decided to architect OpenShift V3 one of the key decisions was moving to the Docker standard for containers. We were excited to see a standard building around containerization. What we also have decided is to take our orchestration technology and work with Google on the Kubernetes project.
Kubernetes brings what we think will be very exciting web scale orchestration capabilities to OpenShift V3 and to the broader open source community. We've joined forces with Google and a number of other customers around orchestrating Docker containers via Kubernetes.
It gives us the model for how these application stacks are built through concepts like Kubernetes pods deployed across different instances and allows us to do things like connect a container running your application, say a JBoss or Tomcat application, to another container that may be running your database tier. Or connecting it to a cluster of database instances that are each running in their own container, or scaling up a cluster of Tomcat or Python or PHP instances and connecting that to a load balancer or a web tier.
All these things are different containers that need to be orchestrated and then deployed. That's where orchestration comes in in the OpenShift V3 model.
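As a hedged illustration of the pod concept Joe mentions, a Kubernetes pod definition for the application tier might look something like the following. Names, labels, and images are made up for the example, and the YAML uses the later stabilized v1 API shape for readability (the API was still in beta at the time of this podcast):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod            # illustrative name
  labels:
    tier: frontend         # label used to wire this pod to a load balancer/service
spec:
  containers:
  - name: tomcat           # the application tier container
    image: tomcat:8        # illustrative image from a registry
    ports:
    - containerPort: 8080  # port the app serves on
```

The database tier Joe describes would typically run in its own pod (or cluster of pods) and be connected to this one through Kubernetes' service and label mechanisms rather than packed into the same pod.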
Gordon:  Let's switch gears a little bit from the underlying technology enabling OpenShift to what's happening in the services that run on OpenShift with the ecosystems associated with OpenShift. What's happening there?
Joe:  This gets to the heart of what matters to developers. All that underlying stuff ultimately is what enables developers to build the applications that they care about. In order to do that you need to be able to support the languages, the frameworks that they want to build those applications in and be able to provide the other services their applications require whether that's databases or messaging or other middleware, even continuous integration and other capabilities.
The packaging that we have in OpenShift for that today is called OpenShift Cartridges. In V3 that packaging will be based on Docker images. The biggest thing we saw in Docker was a very flexible and powerful packaging format for these application components and then a very broad ecosystem in the Docker hub where literally thousands of these component images exist.
Whether you are looking for a Ruby stack or a Python stack or you're looking for an image from MongoDB or MySQL or what have you, you can find not just one but hundreds of examples of these services in the Docker hub today. Our goal in OpenShift V3 is to allow you to run arbitrary images from that ecosystem in your OpenShift environment.
A couple of things there. We're going to start with our own products. In OpenShift we leverage technology from our JBoss middleware division, as well as technology that's packaged up for RHEL as part of Red Hat Software Collections, to provide some of the things that come out of the box when you use OpenShift or purchase OpenShift Enterprise. Things like JBoss, things like Tomcat, and we're about to launch a new Fuse cartridge and so forth. We're packaging all of our existing products as Docker images so that they can be available as services within OpenShift V3.
What we're also doing is working with ISVs on certifying their images through our Red Hat container certification program to also run on not only RHEL but products like OpenShift that build on top of RHEL and RHEL Atomic. What this does is it enables enterprises to know what's the supported option or what is the safe choice if I'm looking for a particular stack.
They know that that's a good option because that's something that's certified not only by the ISV but also by Red Hat. That's the goal there. But ultimately the end goal is to provide as many services as we can to show that there's a limitless number of applications that customers can run on OpenShift and deploy and manage there as well.
Gordon:  To finish things up here talk a little bit about for what all this means for the developer experience because that's really the name of the game.
Joe:  The developers care about what you have available, what's in their palette to build with, and then how you enable them to build it. Is this a familiar environment? Do they have the development tools, the debugging tools that they need to build their applications in a way that's natural for them?
We've focused on giving different developers different interfaces to access OpenShift. That includes our web console, our command line interface, and also various IDEs. The Eclipse IDE through JBoss Developer Studio as well as support for other IDEs like IntelliJ, like Appcelerator and so forth that comes from our ISV partners that we work with.
That's a great starting point. From those interfaces developers can work on their code and then push that code directly to their OpenShift environment. We also then recognize that the developer, again, they need the container to provide the stack but what they care about is their code.
If they are a Java developer they care about their Java code. If they are a Ruby developer they care about their Ruby code. OpenShift basically allows the developer to take their code in whatever repository it lives in and then push that code directly to the platform, automatically have that compiled and running in a container and accessible in the application.
You can also push binaries as well, but OpenShift is unique in allowing you to push either source code where we build it, we manage all your dependencies, we deploy the application or push already configured binaries. That's going to continue in V3 through our integration with Git and the ability to essentially do a Git push of your code and have it automatically rebuild your container image, your Docker image with that code combined with the underlying stack whether it's JBoss or Apache or whatever stack is running it.
Lastly is debugging. Some popular features include being able to do debugging direct from Eclipse, which you can do in OpenShift. Features like port forwarding, features like being able to SSH directly into your containers, into your gears and being able to actually work directly with the underlying runtimes, or run something like a database management tool to work with your database instances and so forth.
These are things that are all part of the developer experience that people value today in OpenShift and that we're bringing forward and continuing to build on as we move forward in OpenShift V3.
Gordon:  By the way, if there are any developers listening in to this and you haven't tried OpenShift Online, why not? It's free, no credit card, no anything like that. It's so simple even I was able to develop an application with it.
Joe:  That's the beauty. We get a lot of benefit by basically seeing both sides. In OpenShift Online, as you mentioned, any developer can come and, with just an email address, sign up for an account and get free access to up to three gears or three containers where they can host one to three applications, and those are gears that don't expire.
Then we have commercial plans. As you expand and need more capacity or if you want professional support you can expand into one of our commercial tiers in the online service. That also provides great value to our OpenShift Enterprise customers because all of the things that we learn by serving the online developers and running the platform ourselves at scale it feeds back into our OpenShift Enterprise product which we then package up and then ship to customers who want to deploy their own PaaS.
In a sense, our enterprise customers are the administrator: they're deploying and administering OpenShift itself, and they learn from talking to our own administrators and benefit from the features and enhancements that we put into OpenShift to run our own online service.
We really are eating our own dog food at large scales and then able to run those businesses in parallel to mutually benefit each other.
Gordon:  I should mention, because otherwise I'm going to get into a lot of trouble with our fine community manager, that OpenShift Origin is the open source project. It's the upstream version.
Joe:  That's right. Every commercial product at Red Hat is based on an open source upstream community project. That's where all our development happens, that's where the innovation lives. OpenShift Origin serves as the upstream for both OpenShift Online and OpenShift Enterprise.
Our community isn't just limited to Origin as we've already discussed. Origin actually ties in other communities. On the host side it ties in communities like Project Atomic, like Fedora, even CentOS. On the container side the Docker community is a community that we participate heavily in and we pull that into Origin.
I already mentioned on the orchestration side we're members of the Kubernetes community and developing there. We're not just pulling the stuff in. Red Hat, as has been noted in a couple of articles recently, is actually one of the leading contributors to projects like Docker as well as projects like Kubernetes.
We're bringing the code versus just talking about these things or forking or pulling in these projects. We're working closely with each of these communities and those are the communities that make up OpenShift. Origin is upstream but Origin is at the center of a lot of other important communities.
I failed to mention the many JBoss communities that provide a lot of the middleware. Things from that ecosystem, whether it's WildFly on the application server side or the Fuse ecosystem on the integration side, and others as well.
It's exciting to work with so many vibrant communities and interesting to work on a product that pulls all these things together into a cohesive solution for our end users.
Gordon:  Thanks a lot, Joe. Anything we missed?
Joe:  For folks who aren't familiar with OpenShift I encourage you to sign up for a free account. Give us a try. If you're looking for more information feel free to reach out to us through
We're excited to have more folks involved in what we're doing in V3. We have some information on our website on how you can get a look at our latest and greatest stuff that's built around Docker and Kubernetes and Atomic and some of the things we've discussed here.

There'll be a full beta this fall but there's already code in the Origin upstream that you can download and try. We're looking at commercial availability on that platform some time next year.

Photos from Bali this summer

Hike up Gunung Batur for sunrise. I was in Bali on business in June and I finally got around to processing the pics I took in the few days before and after the event. Many more up on flickr.

What's up with me this fall?

I’ve been busy in September but have minimized the travel and even local events, in part to gain some focus time on a number of projects I needed to bang through. With October ‘round the bend though, I’ll be heading into the wild blue yonder again for speaking engagements and other purposes. My current schedule looks like the following. Feel free to reach out if you want to meet, record a podcast, or just have a beer.

  • Monktoberfest, October 2nd and 3rd, Portland ME.
  • 451 Research Hosting and Cloud Transformation Summit, October 6-8, Las Vegas, NV
  • CloudOpen/LinuxCon, October 13-15, Dusseldorf, Germany. I have two sessions: What Manufacturing Teaches us about DevOps and The Cloud in 10,000 Words (or 10 Pictures). The former is a largely new presentation that takes a broad look at the ways in which manufacturing has evolved to find parallels with (and lessons for) DevOps. It’s a broader take on the process than just relatively recent lean manufacturing approaches and the like (i.e. Deming et al.). The latter is an update to the session that I gave at CloudOpen Chicago in August which looks at some of the important trends around cloud and related technologies (Big Data, IoT). 
  • CloudExpo, November 4-6, Santa Clara, CA. I’ll be doing a variant of What Manufacturing Teaches us about DevOps. I have passes for the show if you need one.
  • Amazon re:Invent, November 11-14, Las Vegas, NV. 
  • Cloud Law European Summit, November 25, London, UK. I’ll be keynoting on The Hybrid Cloud-the future of cloud computing. 

Monday, September 22, 2014

Links for 09-22-2014

Tuesday, September 09, 2014

The dark side of language diversification

My former colleague, RedMonk’s Stephen O’Grady, writing about the diversification of language options and what might follow:

It may be difficult to conceive of a return to a more simple environment, but remember that the Cambrian explosion the current rate of innovation is often compared to was itself very brief – in geologic terms, at least. Unnatural rates of change are by definition unnatural, and therefore difficult to sustain over time. It is doubtful that we’ll ever see a return to the radically more simple environment created by the early software giants, but it’s likely that we’ll see dramatically fewer popular options per category.

Whether we’re reaching apex of the swing towards fragmentation is debatable, less so is the fact that the pendulum will swing the other way eventually. It’s not a matter of if, but when.


In the past, I’ve mildly disagreed with Stephen and his colleague Donnie Berkholz about whether we were really seeing anything more than the usual plethora of languages that see some use and even some hype but don’t have any real impact and eventually fade away. Most of the languages we use today would have been at least passingly familiar to Web 1.0 and enterprise programmers of the dot-com era even if the mix has shifted over time.

However, Stephen and Donnie’s data has made me at least tentatively come to agree that there’s been an increase in fragmentation. Serious infrastructures, platforms, and apps are being created with “non-mainstream” languages such as Scala. Go, from Google, is probably the latest major new entry; it’s used in Docker, which is one of the hottest software projects happening right now.

Arguably, fragmentation doesn’t matter so much in a microservices world in which all manner of gratuitous differences can be abstracted from the underlying platform. And developers are supposed to be in control after all, aren’t they?

But as Stephen notes, quoting from a number of senior developers, there are limits to all this. Tim Bray, for example, points out that having to stay current in an overly broad toolchain takes away from having the time and attention to actually drain the swamp. And just because you can abstract gratuitous differences from the underlying platform doesn’t make gratuitous differences a good thing. Code usually needs to be maintained after all. 

There's a great choice of tools out there. And it's important to support that choice in programming platforms (as Red Hat does with our OpenShift PaaS). But, at the same time, that choice can offer a lot of rope and it's up to developers and programming teams to use that rope for good and not for ill. 


Send a fax from the beach

There are things in here which are pretty spot on, like the electronic tolls--probably aided by the fact that there were nascent working examples at the time the commercials were made. Others are delightfully amusing anachronisms. For example, wirelessly connecting from the beach--to send a fax. Or virtually tucking a baby in with a FaceTime-like video call--from a phone booth.


Monday, September 08, 2014

Disjoint Sets

Video: Is DevOps changing enterprise IT?

Prior to the last CloudExpo in New York, we trudged through the pouring rain to record some video panels. (There wasn't a taxi to be had and even Uber had a long wait at 3x surge pricing.) But we kept things short and to the point and I think you'll find this worth your time.

Tuesday, August 12, 2014

Links for 08-12-2014

Thursday, August 07, 2014

Why the OS matters (even more) in a containerized world

Red Hat Project Atomic Introduction

My former colleague (and frequent host for good beer at events) Stephen O’Grady of RedMonk has written a typically smart piece titled “What is the Atomic Unit of Computing?” which makes some important points.

However, on one particular point I’d like to share a somewhat different perspective in the context of my cloud work at Red Hat. He makes that point when he writes: "Perhaps more importantly, however, there are two larger industry shifts at work which ease the adoption of container technologies… More specific to containers specifically, however, is the steady erosion in the importance of the operating system."

It’s not the operating system that’s becoming less important even as it continues to evolve. It’s the individual operating system instance that’s been configured, tuned, integrated, and ultimately married to a single application that is becoming less so. 

First of all, let me say that any differences here are probably in part a matter of semantics and perspective. For example, Stephen goes on to write about how PaaS abstracts the application from the operating system running underneath. No quibbles there. There is absolutely an ongoing abstraction of the operating system; we're moving away from the handcrafted and hardcoded operating system instances that accompanied each application instance—just as we previously moved away from operating system instances lovingly crafted for each individual server. Stephen goes on to write—and I also fully agree—that "If applications are heavily operating system dependent and you run a mix of operating systems, containers will be problematic.” Clearly one of the trends that makes containers interesting today in a way that they were not (beyond a niche) a decade ago is the wholesale shift from pet operating systems to cattle operating systems.

But—and here’s where I take some exception to the “erosion in the importance” phrase—the operating system is still there and it’s still providing the framework for all the containers sitting above it. In the case of a containerized operating system, the OS arguably plays an even greater role than in the case of hardware server virtualization where that host was a hypervisor. (Of course, in the case of KVM for example, the hypervisor makes use of the OS for the OS-like functions that it needs, but there’s nothing inherent in the hypervisor architecture requiring that.)

In other words, the operating system matters more than ever. It’s just that you’re using a standard base image across all of your applications rather than taking that standard base image and tweaking it for each individual one. All the security hardening, performance tuning, reliability engineering, and certifications that apply to the virtualized world still apply in the containerized one. 
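To make the "standard base image" idea concrete, here's a minimal sketch of a Dockerfile. The base image tag, package, and file names are purely illustrative assumptions, not a Red Hat-specific recipe:

```dockerfile
# Hypothetical example: every application starts from the same
# standard base image rather than a hand-tuned OS build.
FROM fedora:20

# Each app layers in only its own dependencies on top of that base;
# the hardening and tuning of the base image itself is shared by all apps.
RUN yum install -y python && yum clean all
COPY app.py /opt/app/app.py
CMD ["python", "/opt/app/app.py"]
```

The point of the sketch: the per-application delta is a few thin layers, while the operating system underneath stays uniform across every container.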

To Stephen's broader point, we’re moving toward an architecture in which (the minimum set of) dependencies are packaged with the application rather than bundled as part of a complete operating system image. We’re also moving toward a future in which the OS explicitly deals with multi-host applications, serving as an orchestrator and scheduler for them. This includes modeling the app across multiple hosts and containers and providing the services and APIs to place the apps onto the appropriate resources.  

Project Atomic is a community for the technology behind optimized container hosts; it is also designed to feed requirements back into the respective upstream communities. By leaving the downstream release of Atomic Hosts to the Fedora community, CentOS community and Red Hat, Project Atomic can focus on driving technology innovation. This strategy encompasses containerized application delivery for the open hybrid cloud, including portability across bare metal systems, virtual machines and private and public clouds. Related is Red Hat's recently announced collaboration with Kubernetes to orchestrate Docker containers at scale.

I note at this point that the general concept of portably packaging applications is nothing particularly new. Throughout the aughts, as an industry analyst I spent a fair bit of time writing research notes about the various virtualization and partitioning technologies available at the time. One such set of techs was “application virtualization.” The term covered a fair bit of ground but included products such as one from Trigence which dealt with the problem of conflicting libraries in Windows apps (“DLL hell” if you recall). As a category, application virtualization remained something of a niche but it’s been re-imagined of late.

On the client, application virtualization has effectively been reborn as the app store as I wrote about in 2012. And today, Docker in particular is effectively layering on top of operating system virtualization (aka containers) to create something which looks an awful lot like what application virtualization was intended to accomplish. As my colleague Matt Hicks writes:

Docker is a Linux Container technology that introduced a well thought-out API for interacting with containers and a layered image format that defined how to introduce content into a container. It is an impressive combination and an open source ecosystem building around both the images and the Docker API. With Docker, developers now have an easy way to leverage a vast and growing amount of technology runtimes for their applications. A simple 'docker pull' and they can be running a Java stack, Ruby stack or Python stack very quickly.
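As a rough sketch of the workflow Matt describes (the image name and command here are illustrative, and this assumes a running Docker daemon and access to the public registry):

```shell
# Pull a prebuilt Python runtime image from the public registry
docker pull python:2.7

# Run an application process inside a container based on that image;
# it shares the host's kernel but brings its own userland and libraries
docker run --rm python:2.7 python -c "print('hello from a container')"
```

The same two commands with a different image name get you a Java or Ruby stack instead, which is exactly the "vast and growing amount of technology runtimes" point above.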

There are other pieces as well. Today, OpenShift (Red Hat’s PaaS) applications run across multiple containers, distributed across different container hosts. As we began integrating OpenShift with Docker, the OpenShift Origin GearD project was created to tackle issues like Docker container wiring, orchestration and management via systemd. Kubernetes builds on this work as described earlier.

Add it all together and applications become much more adaptable, much more mobile, much more distributed, and much more lightweight. But they’re still running on something. And that something is an operating system. 

[Update: 8-14-2014. Updated and clarified the description of Project Atomic and its relationship to Linux distributions.]

Tuesday, August 05, 2014

Podcast: OpenShift Origin v4 & Accelerators with Diane Mueller

OpenShift Origin v4 has a variety of new features including native .NET application support and Puppet-based High-Availability deployments. There's also a new Accelerators program to mentor community members who want to speak about and run events related to OpenShift Origin.


Listen to MP3 (0:11:09)
Listen to OGG (0:11:09)


Gordon Haff:  Hi everyone. This is Gordon Haff, in the Cloud Product Strategy group at Red Hat. I'm sitting here at OSCON with Diane Mueller, who is the community manager for OpenShift Origin. Welcome, Diane.
Diane Mueller:  All right. Thanks again for having me, Gordon. I'm totally pleased to be here again with you, and I'm totally stoked about what we've just kicked out the door last week.
Gordon:  What did you guys kick out the door last week?
Diane:  It is release 4.0 of OpenShift Origin. OpenShift Origin, if you don't know it, is a platform‑as‑a‑service. It's an open source project that's sponsored by Red Hat. I'm here at OSCON talking about deploying it on OpenStack. What we're deploying right now is the new release, which has lots of great new features, and there have been some amazing community contributions.
This release includes support for .NET. That's like the word that never gets said inside of Red Hat, .NET support. Thanks to our friends at Uhuru Software, we now have enterprise production‑ready Gear support for Visual Studio. I have the demo. ‑‑ Oh, my god. I do now have a Windows box to do that on.
Uhuru Software did a great native .NET implementation, so we have that support now. A big shout out to Daneyon Hansen at Cisco: he did a whole bunch of Puppet‑based high availability deployment scripts, which have been incorporated by Harrison Ripps, who's on OpenShift as the technical lead for the open source Origin project.
He's incorporated them, so now, not only can you do very simple or very complicated deployments, but you can also do HA deployments, which is totally cool.
We added in central and consolidated logging support, zones and regions, placement policy extensibility, a node watchman service, all kinds of really cool things have been added into Origin release. You can get all of that if you go to, or if you go straight into the site.
Check it out today. I really encourage you to do that, and give us your feedback. Origin, whether you're a developer using it, there's a lot of documentation there. If you're a system administrator, you're going to find lots of things to like in the new release.
We're really very proud of what we've done, and what the community's contributed to this release. It's been amazing. It's been an amazing ride, so that's been really, really cool.
Gordon:  What other new stuff is going on? That sounds like quite a bit by itself...
Diane:  [laughs]
Gordon:  ...but I know you've been working on some other things in your spare time.
Diane:  Yes. The other thing is, I don't scale. I found that out. I've been traveling a lot lately. Been down to Brazil, then to Europe, and all over the world, all over North America. Preaching the gospel of open source and OpenShift, and working and connecting all of the different parties.
What we've done is a riff on the Fedora Ambassadors program. We're launching, next week, the OpenShift Accelerators program. You get that car metaphor: gears, shifting, and accelerators. We're creating a program for mentoring people and giving them all the tools they need to set up user groups locally.
We'll even give you money for pizza and swag. But this is not about swag. This is really about getting the skills to talk about OpenShift, to demo it. If you're interested in this program, you can go to and see all of the prerequisites for joining.
There are a lot of people out there besides me and the Evangelist team that have given presentations. We're going to gather all of that, put it in GitHub, create speaker notes, create some good sample apps, and we're going to coach people. Here at OSCON I got to mentor our very first Accelerator, Alex Barreto, who probably could have done without the coaching, but hey.
He's now prepped up to do presentations on OpenShift on OpenStack, so if you're looking for someone to speak on that topic you don't have to just call me. If you're looking to spin up a user‑group meeting, like Mateus Caruccio has done down in Brazil with the Getup Cloud. He's one of the contributors. They've flown up there.
Angel Rivera has hosted user group meetings. What we're really trying to do is scale the people who can go out and talk about OpenShift, and give them the tools to be more effective and, you know, some pizza money, and make sure that we coordinate all that with an events calendar, so that we know where everybody is and we can help promote those events.
If you're interested in this program, again, reach out to me on Twitter @PythonDJ, or go to the page, sign up, and request a mentor. We would be happy to get you into the Accelerator program.
Gordon:  What's coming down the road, now that you've got this under your belt?
Diane:  There's so much going on. That's why the accelerator program exists. With all of the interrelated projects that OpenShift consumes, from Docker to Google Kubernetes to Project Atomic, there are so many different communities that we touch that scaling is one of our biggest issues.
To be able to do a good job of educating people on all of these new technologies, and how they're being incorporated into OpenShift, and how OpenShift leverages them. If I have to be an expert on SELinux, ActiveMQ, memcached, Docker, OpenStack, and ManageIQ, it just doesn't scale. My brain explodes when I start thinking about all the different topics that we get requests to talk on.
So this fall, stay tuned. There is going to be a huge riff of new technologies being brought into the OpenShift umbrella, and we'll have lots of things that you'll need to get up to speed on. So, we will be broadcasting that information out very shortly, and just keep in touch and keep listening to Gordon's podcast, because I'll be back here, again, very soon.
Gordon:  Yeah. I find it amazing, the last year or two in particular. Probably even just the last year, this explosion of technologies, approaches coming in. And everything touches everything else. I think containers, although it's not a totally new concept, Docker making containers more consumable. It's one of the really important changes that are happening in the Cloud space, and really PaaS is one of the things that drove that originally.
Diane:  Yeah.
Gordon:  And just all the orchestration associated with practically scaling up applications and groups of workloads, it's just an awful lot of stuff to absorb.
Diane:  And I think the beauty of it all, I think the reason why Red Hat succeeds in this space is we have a very strong philosophy: "not invented here" is not an option. Other organizations like Google with Kubernetes, Twitter with Mesos, and Docker are external to Red Hat. We contribute to them, and we collaborate with those communities, but we don't have to dominate them.
It doesn't have to come from within Red Hat to be incorporated into the OpenShift project. And we're really clear that you can't be, the only way open source works is if it's a collaboration. And so, often you'll hear me say "proudly found elsewhere", or PFE. And that's the way that I think Open Source really works, and the way the technologies really advance. And that's what PaaS brought to the table, was a value proposition for orchestration.
And what we brought with OpenShift, I think, was a great number of concepts that people have adopted. And now what we're seeing is some of those concepts being commoditized. And so rather than maintaining a wheel that's proprietary‑ish, even though it's open source, embracing things like Google Kubernetes and Docker, and the next iteration of OpenShift leveraging those.
It's not that it lessens the value proposition of OpenShift, what it does is it extends the community. We get to now say "Yeah, Google Kubernetes, they're working on OpenShift."
Gordon:  I probably should mention here, if we're scaring away any listeners, apart from you and I, my perspective, we need to know how this stuff all works underneath the covers, at least some level. But actually, one of the beauties of OpenShift ‑‑ if you use the online service or if you use OpenShift Origin, that a system admin type has set up ‑‑ is that you as a developer can really be abstracted from an awful lot of this.
Diane:  Yes. We're bandying about a lot of names of projects here. To put it in context, you use an Internet browser, you go to a web page, you do not know what JavaScript is. You do not know, hopefully, too much HTML5 or CSS. You just use it, you use the web, and from a developer's point of view, all of these technologies that are under the hood at OpenShift, they'll just use it. It'll get deployed, rolled out, managed, and auto‑scaled for you, as a developer. And from an administrator's or the SysAdmin's side, who's administering the platform‑as‑a‑service, those are abstractions as well. You're just managing the platform‑as‑a‑service, not all the pieces and parts. That's the value proposition of platform‑as‑a‑service.
Gordon:  Great. Lots of exciting new stuff. I look forward to digging into this myself.
Diane:  All right. Glad to be here and we'll be back again soon.

Gordon:  Thanks Diane. Thanks everyone.