Wednesday, January 21, 2015

Don't skeuomorph your containers

Containers were initially pitched as more or less just another form of partitioning: a way to split large systems into smaller ones so that workloads not requiring a complete system by themselves could coexist without interfering with each other. Server/hardware virtualization is the most familiar form of partitioning today but, in its x86 form, it was only the latest in a long series of partitioning techniques initially applied mostly to mainframes and Unix servers.

The implementation details of these various approaches differed enormously and even within a single vendor—nay, within a single system design—multiple techniques hit different points along a continuum which mostly traded off flexibility against degree of isolation between workloads. For example, the HP Superdome had a form of physical partitioning using hardware, a more software-based partitioning approach, as well as a server virtualization variant for HP-UX on the system’s Itanium processors. 

But, whatever their differences, these approaches didn't really change much about how one used and interacted with the individual partitions. They were like the original pre-partitioned systems; there were just more of them, and they were correspondingly smaller. Indeed, that was sort of the point. Partitioning was fundamentally about efficiency, and it was logically just an extension of the resource management approaches that had historically allowed multiple workloads to coexist on a single system.


At a financial industry luncheon discussion I attended last December, one of the participants coined a term that I promptly told him I was going to steal. And I did. That term was "skeuomorphic virtualization," which he used to describe hardware/server virtualization. Skeuomorphism is usually discussed in the context of industrial design. Wikipedia describes a skeuomorph as "a derivative object that retains ornamental design cues from structures that were necessary in the original." The term has entered the popular lexicon because of the shift away from shadows and other references to the physical world, such as leather-patterned icons, in recent versions of Apple's iOS.

However, the concept of skeuomorphism can be thought of as applying more broadly—to the idea that existing patterns and modes of interaction can be retained even though they’re not necessarily required for a new technology. In the case of “skeuomorphic virtualization,” a hypervisor abstracts the underlying hardware. While this abstraction was employed over time to enable new capabilities like live migration that were difficult and expensive to implement on bare metal, virtualized servers still largely look and feel like physical ones to their users. Large pools of virtualized servers do require new management software and techniques—think the VMware administrator role—but the fundamental units under management still have a lot in common with a physical box: independent operating system instances that are individually customizable and which are often relatively large and long-lived. Think of all the work that has gone into scaling up individual VMs in both proprietary virtualization and open source KVM/Red Hat Enterprise Virtualization. 

In fact, I'll go so far as to argue that the hardware virtualization approach largely won out over the alternatives circa 2000 because of skeuomorphism. Hardware virtualization let companies use their servers more efficiently by placing more workloads on each server. But it also let them continue to use whatever hodgepodge of operating system versions they were already running and to continue to treat individual instances as unique "snowflake" servers if they so chose. The main OS virtualization (a.k.a. containers) alternative at the time—SWSoft's Virtuozzo—wasn't as good a match for highly heterogeneous enterprise environments because it required all the workloads on a server to run atop a single OS kernel. In other words, it imposed requirements that went beyond the typical datacenter reality of the day. (Lots more on that background.)

Today, however, as containers enjoy a new resurgence of interest, it would be a mistake to continue to treat this form of virtualization as essentially a different flavor of physical server. As my Red Hat colleague Mark Lamourine noted on a recent podcast:

One of the things I've hit so far, repeatedly, and I didn't really expect it at first because I'd already gotten myself immersed in this was that everybody's first response when they say, "Oh, we're going to move our application to containers," is that they're thinking of their application as the database, the Web server, the communications pieces, the storage. They're like, "Well, we'll take that and we'll put it all in one container because we're used to putting it all on one host or all in one virtual machine. That'll be the simplest way to start leveraging containers." In every case, it takes several days to a week or two for the people looking at it to suddenly realize that it's really important to start thinking about decomposition, to start thinking about their application as a set of components rather than as a unit.

In other words, modern containers can be thought of and approached as "fat containers" that are essentially a variant of legacy virtual machines. But it's far more fruitful and useful to approach them as something fundamentally new and enabling, part and parcel of an environment that includes containerized operating systems, container packaging systems, container orchestration like Kubernetes, DevOps practices, microservices architectures, "cattle" workloads, software-defined everything, and pervasive open source: in short, a new platform for cloud apps.

 

 

Wednesday, January 14, 2015

Links for 01-14-2015

Wednesday, January 07, 2015

Photo: Zabriskie Point from last fall

After Amazon re:Invent in Las Vegas, I spent a few days in Death Valley (which is one of the few redeeming things about going to Las Vegas). On my last morning, I got an interesting mix of sun and clouds. Zabriskie Point was actually supposed to be closed for reconstruction work, but the closure had been pushed out a month.

Podcast: Containerized operating systems with Mark Lamourine

Packaging applications using containers is a hot trend that goes well beyond containers as just operating system virtualization. In this podcast, we discuss the benefits of a containerized operating system like Red Hat Enterprise Linux Atomic Host, how it works from a technical perspective, and how containers aren't just another take on virtualization.


Listen to MP3 (0:17:08)
Listen to OGG (0:17:08)

[Transcript]

Gordon Haff:  Hi, everyone. This is Gordon Haff with the New Year's edition of the Cloudy Chat Podcast. I'm once again here with my colleague, Mark Lamourine.
Welcome, Mark.
Mark Lamourine:  Hello.
Gordon:  We've been talking a lot about containers and the orchestration of containers. For this session, we'll talk about where those containers run. We're going to talk a little bit about the generic containerized operating systems, and then get into some specifics about the Red Hat Enterprise Linux 7 Atomic Host which is now in beta.
We have an upstream project, Project Atomic, and we have the beta of our commercial offering. For the rest of this podcast, we're going to talk about Atomic or Atomic host or RHEL Atomic, and that's just shorthand for this technology in general. Mark, maybe you could start us off with talking a little bit about the idea behind container hosts in general.
Mark:  One of the important things about containers is that they make it possible to do some things that you can't do if you're running on the bare host. It allows you to include libraries and things that might not be resident on every host. You don't have to worry about it. As the application designer, you just include the parts you need.
They run inside this virtual container. One of the things people noticed right away is that if you start doing this, suddenly a lot of the things that are on a general purpose host aren't really necessary there, because the containers all bring along whatever they need. It very quickly became evident that you could pare out a lot of the general purpose applications, leaving only a minimal host which is designed specifically to run containers.
Gordon:  Because you can basically take the specific things that a given application needs, package them up, and have them essentially be part of that application.
Mark:  It turns out that once you have a set of containers, the way you work with them is merely to start and stop the containers. You don't have to run a lot of other commands to make the applications run. There's no point in having those commands there at all.
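As a concrete aside for the show notes, "working with containers" at this level really does boil down to a handful of lifecycle commands. A minimal sketch with the Docker CLI; the image and container names are only illustrative:

    # Start a containerized service in the background
    docker run -d --name mydb registry.example.com/mydb-image

    # Day-to-day operations are just lifecycle commands
    docker ps          # list running containers
    docker stop mydb   # stop the container
    docker start mydb  # start it again
    docker rm mydb     # remove it when it's no longer needed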
Gordon:  This really comes back to the theme of one of our earlier podcasts that we did towards the end of last year: that containers have almost evolved beyond this thinking of them as a way of virtualizing the operating system. Which they are, from a technical standpoint, but the thing that's really interesting people about them is they're a way to essentially virtualize and package up applications.
Mark:  That's really one of the more critical aspects of containerization: it's really a new software delivery mechanism. We started off with tarballs and GZIPs and graduated, although people curse them sometimes, to packaging systems, whether it's RPM, Debian, or SysV packaging on Solaris.
We've got some other things if you go to different languages. Ruby has its own Ruby packaging mechanism, the Ruby Gem. Python has their own. But each of those is language specific and application specific. They load more stuff on your host.
What containers bring is the ability to keep your host clean, to not have all of that extra stuff burdening the host that's running the application. Those parts are actually in the container with your application. Your host doesn't even have to know they're there.
Gordon:  Conceptually, everything we've been talking about really applies to modern containers as a general concept. Mark, can we talk about some of the differences in flavor, approach, or philosophy among the different containerized operating systems out there?
We've got Atomic. We've got CoreOS. There are various other types of projects that are in the works.
Mark:  In some cases, they're very similar. They're all getting at the idea of a very minimal base operating system that is designed and tuned for running containers. CoreOS had that even before Docker; they had a means of logging on and just running individual pieces, but Docker was the thing that really brought it all to life by making it so that you could create new containers easily.
You could create images easily, then instantiate them easily, and it created an ecosystem that has started really driving this concept. At their base, they're very similar. They do have some differing philosophies when it comes to management. That's, I think, where some of the differentiation is going to come in.
Gordon:  Could you maybe go into some detail about how, let's say, Atomic does things differently from CoreOS, for example?
Mark:  Atomic started with a couple of different projects. It started first with Colin Walters' OSTree. One of the ideas about these containerized hosts is that, because you are not installing a bunch of applications and then having to maintain them each individually, you can create a system where you can do your updates to the operating system and be able to roll back.
With an RPM‑based maintenance system, once you install the RPMs or the Debian packages, you can remove them, but you can't easily go back to the previous state. Both CoreOS and Project Atomic have this idea that when you replace or update the operating system image, you do it in an atomic fashion. You're doing this in a way which allows you to have nearly perfect rollback to a previous state.
Now they have very different ways of accomplishing this, but, in essence, they have the same goal. That's actually a secondary goal to the container host, but it's something that becomes possible and reasonable when you have a fairly small host versus a more generalized one which has lots and lots of packages to maintain.
Gordon:  Could you describe under the cover how Atomic is doing things?
Mark:  The way Atomic does this is that the author of OSTree, Colin Walters, created a mechanism where instead of having a single file system tree, he has a hidden file system tree that's controlled by the boot loader. What he really has is one that has all of the files in it. Then he has two separate file system trees which contain hard links to those actual files.
When you do an Atomic update, you're only updating one of those trees. You're running the other one. But because the one you're running remains unchanged while you're doing an update, you can reboot forward to your new environment. If that fails, you can reboot back to your old one which hasn't been modified.
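For the show notes, here is roughly what that looks like from the command line on an Atomic host. This is only a sketch using the rpm-ostree tool; exact commands and output vary by release:

    # Show the deployed trees: the one you're running and the previous one
    rpm-ostree status

    # Fetch and deploy an updated tree; the running tree is left untouched
    rpm-ostree upgrade
    systemctl reboot     # boot into the new deployment

    # If the new tree misbehaves, flip back to the old, unmodified one
    rpm-ostree rollback
    systemctl reboot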
Gordon:  That's really quite a change from what you've traditionally done, because you build up an operating system over time, and your only real option is to wipe everything clean and start over again. And you couldn't easily do that, because all your applications and all your customizations were in there, so it was really hard to do this kind of thing.
Mark:  If you break your system rolling forward, it leaves you with a few very limited choices, which were things like rebuild the machine from kickstart and configuration management. Atomic allows you to do it in a much cleaner fashion and with a lot more assurance that you're going to get what you expect.
Gordon:  The other point we probably ought to highlight with respect to Atomic is that this is built from Red Hat Enterprise Linux 7. All of the certifications, the hardware certifications and other types of certifications, and support mechanisms and everything associated with RHEL 7 still apply to Atomic Host. You get all these benefits you're talking about, but it's still the RHEL that you know and love.
Mark:  If you logged into one, you wouldn't be able to tell it's not RHEL, unless you know where to look to find the Atomic label. If you log into one as a user, the only thing you're going to notice is that there are very few actual user accounts because all the applications run in a container. There's no need for lots of special user accounts.
You log in as root and you do your work, and usually you'll use some orchestration or distributed control system to actually start and stop the containers. It looks like RHEL. It is RHEL.
Gordon:  From the orchestration perspective, it actually includes Kubernetes which is the framework for managing clusters of containers. We've been collaborating with Google on this.
Mark:  Kubernetes is a work in progress. Google is working with us developing, as you call it, an orchestration system. It's a way for you to, instead of saying, "Start this Docker command," or, "Stop this Docker command," decide, "I want to build an application with a database and a Web server and a messaging server."
You describe all this and say, "Go," and Kubernetes makes it happen. You don't have to worry about the actual placement.
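To make that a bit more concrete for the show notes, a declarative description in Kubernetes looks something like the sketch below. This uses present-day manifest syntax rather than the early API that existed at the time of this conversation, and the names and images are purely illustrative; a real application would typically split each component into its own pod behind a controller.

    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp
      labels:
        app: myapp
    spec:
      containers:
      - name: web
        image: example/myapp-web    # illustrative image name
        ports:
        - containerPort: 80
      - name: db
        image: example/myapp-db     # illustrative image name
    EOF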
Gordon:  One of the really interesting things about Atomic, from my perspective, is that we are getting into all of this cool, new containerized type of stuff. We're getting container portability across hybrid cloud deployments, different physical hardware, certified hypervisors, and so forth, and public cloud providers like Amazon.
But, at the same time, from a sys admin perspective, this is not really a radical change to how they've conventionally done things.
Mark:  There are ways in which it's the same. The hosts are going to be deployed using PXE or some kind of install‑to‑disc mechanism. The user management of those hosts will probably be very similar. There actually is going to be some significant change and some significant learning in how to use these things and where the boundaries are.
One of the things that's going to change is where the boundaries are between the admins and the users, the application developers, the operations people. I think that's going to settle out. The boundaries are going to shift. It might not come out the same way it would have three years ago.
Gordon:  That is a fair point. I was at a luncheon in New York City before the holidays, and we were having a discussion about containers. One of the points that came out of the discussion, and I think it's an important one, is that you can use containers to look like a slightly different, maybe a little less isolated, a little bit more efficient version of server virtualization.
But, and I think this is really the key point, using containers most efficiently really requires thinking about applications, application development, and application architectures in a lot of different ways.
Mark:  One of the things I've hit so far, repeatedly, and I didn't really expect it at first because I'd already gotten myself immersed in this was that everybody's first response when they say, "Oh, we're going to move our application to containers," is that they're thinking of their application as the database, the Web server, the communications pieces, the storage.
They're like, "Well, we'll take that and we'll put it all in one container because we're used to putting it all on one host or all in one virtual machine. That'll be the simplest way to start leveraging containers." In every case, it takes several days to a week or two for the people looking at it to suddenly realize that it's really important to start thinking about decomposition, to start thinking about their application as a set of components rather than as a unit.
That's one of the places where the orchestration models are going to come in because they're going to allow you to, first, decompose your application from the traditional model, and then recompose it and still treat it as an application, but now using these container components.
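As a small sketch of that decomposition at the Docker level (image names are illustrative, and --link was the usual way to wire containers together in Docker at the time):

    # Instead of one "fat" container, run each component as its own container
    docker run -d --name appdb example/app-database
    docker run -d --name appweb --link appdb:db -p 8080:80 example/app-webserver

    # The web container reaches the database through the link,
    # and only the web tier is published to the outside world.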
Gordon:  One of the folks I was having lunch with, I forget who it was actually, but I told him I was going to steal this term of his. He referred to server virtualization as "skeuomorphic virtualization." What he meant by that was that when server virtualization really came in, one of the reasons it was so successful was that it made physical servers better utilized, and therefore more cost effective.
But, by and large to a first approximation, it didn't change the whole operational and management model of servers. As you say, you can, in principle, use containers the same way. In fact, service providers pretty much have done that. It's a more efficient form of virtualization.
The reason everyone's so excited here, and the reason we're having this series of podcasts, is that it enables things like DevOps. It enables new operational models. It enables new application architectures.
Mark:  The last one is really the interesting one. I like the skeuomorphic metaphor because the reason virtualized operating systems, virtualized hardware, was adopted so easily is that everybody went, "Oh. Oh, I get it. That's just like my hardware. Once I get past the first piece."
Containers really aren't. Containers really are a little different. To get the best advantage from them, it's going to take a little bit of different thinking along the way. The Holy Grail of software development has been the idea of a hardware store of objects, where you could walk down the aisles of your hardware store, pick up a hammer and a bunch of plywood and two-by-fours, and build something.
All of those things were standardized. You'd have all the standard plumbing, heating, and whatever. All of the efforts so far have failed to some degree or other. You look at object‑oriented programming. People thought, "Oh, this is going to completely change the way we program." It's had some effect, but it hasn't had the effect of, "Oh, this is just a hardware store where I go in and pick the components I want and it all works."
Containerization, I don't know if it's going to be successful, but I think it has more promise than the previous ones did. That you can create a database container that is generic enough that it only exposes the relevant variables, and that somebody can come along, once they have a certified one, and say, "I would like a database container. I just need to give it the five variables it needs to initialize the database, and I'm ready to go."
The component will be reusable to the point where the user no longer has to really think beyond, "Here are my inputs."
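The official PostgreSQL image on the Docker Hub is a reasonable illustration of that idea: a generic, reusable database container that asks its consumer for little more than a few environment variables. A sketch (treat the exact variable names and values as examples):

    docker run -d --name orders-db \
      -e POSTGRES_USER=orders \
      -e POSTGRES_PASSWORD=changeme \
      -e POSTGRES_DB=orders \
      postgres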
Gordon:  Great, Mark. For those listeners who want to take a look at this, as I said at the beginning, the Red Hat Enterprise Linux 7 Atomic Host is available in beta. It's on both Amazon Web Services and the Google Compute platform. If you want to take a look at the upstream project, that is Project Atomic and links to all that stuff will be in the show notes.

This gets us off to a great start in the New Year. We're going to be talking much more about these and related topics in upcoming podcasts. Thank you, everyone. Thank you, Mark.

Tuesday, December 16, 2014

Links for 12-16-2014

Monday, December 15, 2014

Photo: Start of winter hiking season

Led an AMC group hiking weekend up to Pinkham Notch in New Hampshire this weekend. Lots of snow!

Thursday, December 11, 2014

Links for 12-11-2014

Podcast: The layers of containers with Red Hat's Mark Lamourine


Mark Lamourine and I continue our ongoing containerization series. In this entry, we break down the containerization stack from the Linux kernel to container packaging to orchestration across multiple hosts and talk about where containers are going.
This will probably wrap up 2014 for the Cloudy Chat podcast. Among other goals for 2015, I'd like to continue this series with Mark on a roughly biweekly basis. I also expect to start folding in DevOps as a way of discussing how to take advantage of containerized infrastructures.

Listen to MP3 (0:22:30)
Listen to OGG (0:22:30)

[Transcript]

Gordon Haff:  I'm sitting here with Mark Lamourine, who's a co‑worker of mine, again today. One of our plans for this coming year is that I'm going to be inviting Mark onto this show more frequently, because Mark's doing a lot of work around the integration of containers and microservices into open hybrid cloud platforms. There's a lot of interest in these topics and in some of the other technologies and trends that intersect with them.
We're going to spend a fair bit of time next year diving down into some of the details. One of the things as we dive down in all these details is we're not going to get into the ABC basics every week, but I'm going to make sure to put some links in the show notes.
If you like what you hear, but want to learn a little bit more about some of the foundational aspects or some of the basics, just go to my blog and look at the show notes. That should have lots of good pointers for you.
Finally, last piece of housekeeping for today: we're going to be talking about the future of containers. There's been some, shall we say, interesting news around containers this week. But on this podcast we're going to stay focused on the customer, the user, the consumer-of-containers perspective, looking at where containers are going and where people might want to be paying attention over the next, let's say, 6 to 12 months.
We don't want to get into a lot of inside baseball, inside-the-beltway sort of politics about what's going on with different companies and personalities; we'll really stay focused on things from a technology perspective. That's my intro. Welcome, Mark.
Mark Lamourine:  Thank you.
Gordon:  Mark, I think most of the listeners here appreciate essentially what containers are, at a high level. Operating system virtualization, the ability to run multiple workloads, multiple applications, within a single instance of an operating system, within a single kernel. But that's, if you would, the first layer of the onion.
What I'd like in this show, as we're talking about where we are today, and where we're going in the future, is to talk a bit more about the different pieces of containers, the different aspects of containers.
The first aspect I'd like to talk about is the foundational elements. What's in the kernel of the operating system. This is kernel space stuff. So could you talk about that?
Mark:  We've discussed before that the initial technology, the enabling technology, in this case is kernel namespaces, and that there have been things like this in the past. Essentially what they do is allow someone to give a process a different view of the operating system.
They operate when a process asks the kernel, "Where am I in the file system?" The namespaces can say, "Oh, you're at slash," or, "You're at something else," and the answer the process gets is a little bit different from what you'd see outside the container. That's really the core technology: the idea of an altered view of the operating system from the point of view of the contained process.
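A quick way to see that altered view for yourself is the unshare utility from util-linux, which wraps the same namespace system calls that container runtimes build on. A sketch (run as root):

    # Start a shell in new PID and mount namespaces, with its own /proc
    unshare --pid --fork --mount --mount-proc /bin/bash

    # Inside that shell, the view is altered: ps shows only this shell
    # (as PID 1) and ps itself, not the rest of the system's processes.
    ps aux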
Gordon:  Now there are some different philosophies out there about exactly how you go about doing this from a process perspective.
Mark:  Not so much the technology, but what do you do with it once you've got it? How do you mount file systems? What views are useful? How do you build up a special view for a process which is this thing inside a container? There are different ways of doing that, and people have different goals. That informs how they want to build one.
Gordon:  Although this part of containers, this aspect of containers, is often hidden, I think it's important to note that it's a pretty important part of the entire subsystem, because everything else is resting on top of it.
We've seen some news stories recently, for example, about how, if you don't have consistency among kernel levels, it's hard to have portability of the applications in a container between environments.
Mark:  How you look at that view, how you compose that view, is one element that's interesting and can differ, but you want to make sure that it's provided uniformly so everybody knows what they're getting. One important aspect is that there are several of these views.
There's the process view: what other processes, what PIDs, are visible to a contained process. There's the file system view: each process can see the file system in a different way, or two processes can share one view of the file system while another process gets a different one.
This composition is something that people are still working out: how an application that has multiple processes with different responsibilities would want to see things, and how you build up the environment for each one.
Gordon:  That's the foundational container part, which is part of the operating system and depends on a lot of operating system services. It depends on a lot of things the operating system does for security, for resource isolation, that type of stuff.
Now let's talk about the image that is going to make use of that container. As we were talking before this podcast, from your perspective, there are really two particular aspects of images ‑‑ the spec of the image and the instantiation, the actual deployed image.
Let's first talk about the spec of the image and what are some of the things, the characteristics that you see as being important there now and moving forward.
Mark:  Again, uniformity is probably the biggest one. The big container system right now is Docker and Docker has a very concise way of describing what should go into a container. The specification is very small and that's one of the things that Docker has brought and made people realize that this is important.
Prior to using something like Docker, describing a container was very difficult and very time‑consuming and it required expert knowledge. With the realization that you need some kind of concise specification and that you can make some assumptions, containers have become easier to build, and that's really what's instigated the rise of containers in the marketplace.
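For the show notes, the concise specification being described here is the Dockerfile. A minimal sketch; the base image and application details are only illustrative:

    cat > Dockerfile <<'EOF'
    FROM fedora:21
    RUN yum install -y httpd && yum clean all
    COPY site/ /var/www/html/
    EXPOSE 80
    CMD ["httpd", "-DFOREGROUND"]
    EOF

    docker build -t example/mysite .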
Gordon:  Let's talk about the other aspect of containers, the instantiation, the payload, the actual instance, if you would. What are some of the trends you see happening there?
Mark:  Again, Docker was kind of the inception. The assumption they made was that you can take this specification and create a series of reusable layers to build up the image. And they specified that those layers were tarballs.
Mostly they established a standard, and once that standard is there, people can just stop thinking about it and go on and start working with it. That uniformity of whatever the composed piece is, is going to be really important going forward.
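You can see that tarball-of-layers structure directly, as a sketch (the exact on-disk layout of docker save output has changed across Docker versions):

    # Export an image to a tar stream and list what's inside
    docker save example/mysite | tar -tvf - | head

    # Or look at the stack of reusable layers that built the image
    docker history example/mysite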
Gordon:  However, that's not necessarily tied into all the other aspects of a container subsystem. That spec, that format can really exist independently of other pieces of technology, and that's probably going to be kind of a theme that we hit a few times in this podcast.
Mark:  At each place you want to have uniformity, but like you said, that doesn't preclude having a different way of specifying what goes in ‑‑ just that once you've specified it, it's got to have a form that other people can accept. The same thing is true with the image format itself.
Once that's there, it doesn't matter how it gets instantiated on the far machine, as long as the behavior is the same. That really gets the job done. It allows people to focus on the job they need to do and not a lot of extra work putting everything together.
Gordon:  This always was the conflict with standards at some level. Standards are always great from the point of view of the customer and they really have enormous value in terms of portability, in terms of just not having to think about certain things.
On the other hand, they need to embody the functionality that's needed to get the job done. We don't use parallel printer cables any longer, thank God, because there are standards, certainly, but they're also not very useful in today's world.
Mark:  Yeah, I've said before that probably one of the biggest things that Docker did was to make a certain set of assumptions, and to live with those assumptions, those simplifying assumptions.
That allowed them to get on with the work of building something that was functional. I think that the assumptions are going to be challenged. There are going to be places where their assumptions are too tight for some kinds of uses.
I think the community is going to inform that, and the community is going to say, "This is something we need to expand on." Without a different assumption, or without the ability to control those assumptions, we can't really move forward. There are a number of different responses in the market to that.
Gordon:  This is how successful open source projects work. You have a community. You have members of that community with specific needs. If the project as it exists doesn't meet those needs, they need to argue, they need to contribute, they need to convince other people that their ideas, the things they need are really important to the project as a whole.
Of course, there need to be mechanisms in place in that project to have that wide range of contributions.
Mark:  In any good open source project, you get that right from the beginning. The assumption by the authors is, we've got a good idea here or I think I've got a good idea here and I'm going to instantiate it. I'm going to create it and make it the way I think it needs to be.
Then I'm going to accept feedback, because people are going to want to do things with it. Once they see something's neat, they're going to want to say, "Yeah, that's exactly what I want. Only it would be better if I had this too."
Gordon:  Let's talk about the repositories, the ecosystems. You talked about this a little bit last time, but where are we now and what are the next steps? What needs to be done here?
Mark:  Again, returning to Docker, another one of their simplifying assumptions was the creation of this centralized repository of images. That allowed people to get started really quickly. One of the things that people found when they started looking at their enterprise, though, was that it was a public space.
What we need to go forward is we need the ability to know where images come from. Right now things are just thrown out into space, and when you pull something down you don't know where it came from. I don't think there's anybody who really thinks that that's the ideal in the end.
I think to go forward with it, the community needs to build mechanisms where someone who builds a new container image can sign it, can verify that it comes from the person who claims that they built it, and that it has only the parts that were specified and that it gets put out in a public place if it's intended to be public, so that people can be assured that it meets all their requirements and that it's not something malicious.
On the flip side you get companies where they're going to say, "No, I don't want to put this in a public space." There needs to be some private repository mechanism where a company can develop their own images, place them where they can reach them, and retrieve them and use them in ways that they want without exposing it to the public.
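As a sketch of that private-repository side, Docker's open source registry can itself be run as a container on internal infrastructure. The hostnames and the unauthenticated setup here are purely illustrative; a real deployment would add TLS and access control:

    # Run a private registry on an internal host
    docker run -d --name registry -p 5000:5000 registry

    # Tag and push an image to it instead of the public hub
    docker tag example/mysite internal-registry.example.com:5000/mysite
    docker push internal-registry.example.com:5000/mysite

    # Other hosts inside the company pull from the same private location
    docker pull internal-registry.example.com:5000/mysite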
Gordon:  Again, this is another example of, there's not just going to be just one way of doing things, because there's a lot of legitimate different requirements out there.
Mark:  There are different environments, although I think there's probably a limited number that we'll find over time. I don't think it's completely open. I think there are a limited number of environments and uses that will fall out over time as people explore how they want to use it.
Gordon:  Finally, let's talk about the orchestration and scheduling piece. You touched on some of this during our last podcast, but it's another piece that I think we sometimes tend to think of as just part of this container subsystem.
In fact we're pretty early in the container concept and we're really still developing how these pieces fit with and complement the lower‑level container function.
Mark:  The whole technology started off with, "Let's build something that runs one." It's actually working out really nicely that as people start using containers, they're kind of naturally backing into bigger and bigger spaces.
They start off going, "Oh, this is really cool. I can run a container on my box that can either run a command I want or I can build a small application using a database and a web server container and I can just push my content into it and it goes."
And people are going, "That's great. Now, how do I do 12?" Or companies are looking at it and saying, "Here's an opportunity. If I can make it so other people can do this, I can sell that service, but I have to enable it for lots of people." I think we're backing into this growing environment that orchestration is going to fill.
I think there's still a lot of work to be done with the orchestration right now. The various orchestration mechanisms, they're not really finished. There are pieces that are still unclear ‑‑ how to manage storage between containers, and a big one is, in a container farm, in an orchestrated container farm, how do you provide network access from the outside?
A lot of work has gone into making it so the containers can communicate with each other, but they're not very useful for most cases until you can reach in from the outside and get information out of them. That requires a software‑defined network, which, if you follow the OpenStack model, they have these things.
That's actually still one of the most difficult problems within OpenStack. I think if you ask people about the three iterations of software‑defined networks within OpenStack, you're going to find that they're still working out the problems with that and OpenStack is four or five years older than any of the container systems are.
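For reference in the show notes, the direction Kubernetes has taken for reaching containers from the outside is the Service abstraction. A sketch in present-day kubectl syntax (the pod name is illustrative, and this area was still very much in flux at the time of this conversation):

    # Expose a pod's port on a port reachable from outside the cluster
    kubectl expose pod myapp --port=80 --type=NodePort

    # See which node port was allocated
    kubectl get service myapp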
Gordon:  One of the things that strikes me when I go to events like LinuxCon and CloudOpen and other types of particularly open source‑oriented industry events is that there's a lot of different work, in many cases addressing different use cases, whether it's Twitter's use cases or Facebook's use cases or some enterprise use case or Google.
There're all these different projects that are being integrated together in different ways, and the thing that strikes me is, first of all, wow, there are a lot of smart people working on this stuff out there. But second, we're nowhere near ready to say, "This is the golden path to container orchestration now and forever."
Mark:  I would be really surprised if we found that there ever was one golden way. I suspect in the same way that we've got different environments for different uses, you'll find that there are small‑scale orchestration systems that are great for a small shop, and then you're going to get large enterprise systems.
I can guarantee that whatever Google uses in the next five years is going to be something that I probably wouldn't want to install in my house.
Gordon:  Or your phone.
Mark:  Or my phone, yeah. The different scales are going to have very different patterns for use and very different characteristics. I think that there's room in each of those environments to fill it.
Gordon:  Sort of a related theme ‑‑ what I'm going to simplistically call provisioning tools. I was just having a discussion yesterday. You've got Puppet, you've got Chef, you've got Ansible, you've got Salt.
Certainly there are adherents and detractors for all of them, and they're at various different points in their maturity cycles, but the other thing that strikes me is there's also a very clear affinity between certain groups of users, like developers or sys admins, towards one tool rather than another, because they're really not all the same thing.
Mark:  They're not, and I thought it was interesting that you used the term "provisioning tool" when talking about Puppet and Chef, because that is the way in which people are starting to use it now, where five years ago they would have called it a configuration management tool and the focus wouldn't have been on the initial setup, although that's important. It would have been on long‑term state management.
That's one of the places where containers are going to change how people think about this work, because I think the focus is going to be more on the initial setup and short‑term life of software rather than the traditional ‑‑ actually someone told me to use the word "conventional," although in this case "traditional" might make sense.
The traditional "Set it up and maintain it for a long period of time." Your point about people having different tools for different perspectives is true. I also want to point out that all of these things, even while they're under development, they have use. You might claim that Puppet and Chef and these various things, the configuration management or the provisioning or the container market are evolving.
But at the same time, they're in use. People are getting work out of them right now. People are getting work out of containers now, as much as we're talking about the long‑term aspects, people are using containers now for real work.
Gordon:  Gartner has this idea they call bimodal IT. On one side you have this traditional IT, conventional IT, whatever you want to call it, where you have these "pets" type systems. The system runs for a long time. If the application gets sick, you try and nurse it back to health.
You do remediation in the running system for security patches and other types of patches and the like. Then you have this fast IT, and the idea there is, I've got these relatively short-lived systems. If something's wrong with one, it takes, what, half a second to spin up a new container. Why on earth would I bother nursing it back to health?
Mark:  I think this is another case where perspective is going to be really important. If you're a PaaS or an IaaS shop, the individual pieces to you are cattle. You don't really care. You've got hundreds, thousands, hundreds of thousands of them out there, and one of them dropping off isn't all that big a deal.
But if you're in a PaaS situation, your cattle is somebody else's pet, and it's going to be really important either to keep that cattle alive, the individual ones, because to someone it's really their most important thing, or to help them find ways so that they can treat it like a pet while you treat it like cattle.
Where they say, "I want my special application," and you spin up automatically two or three redundant systems so that you see the pieces dying, you kill them off, you restart them, but the user doesn't see that. They shouldn't have to manage it.
Gordon:  To pick Netflix as a much overused example: obviously, Netflix delivering movies to you as a consumer, that's the cattle at one level. You lose your ability to watch Orange is the New Black or whatever, and you're going to be unhappy.
From Netflix's point of view, if you're unhappy, they're unhappy, but the individual microservices are very explicitly designed so that they can individually fail.
Mark:  This is what I was saying about needing to be able to treat it both ways. I don't know, but I suspect that when you're watching your movie, if the server which is feeding it dies, what Netflix sees is, "Oh, something died. Start a new one." What you see is maybe a few seconds' glitch in your movie, but it comes back.
Mostly, they're reliable. If that's true, then they've managed to do what I was saying. They've managed to make it so that they preserved the important pet information for you somehow. It might be on your client side, but the cattle part of it is still, "Get rid of it and start again."
Gordon:  Well, Mark, this has been a great conversation. We've probably gone on long enough today. But, as I said at the beginning, we're going to continue this as a series going into the New Year because there is a lot happening here, and nobody has all the answers today.
Mark:  That's for sure.

Gordon:  Hey, thanks everyone. Have great holidays.