This blog comments on a variety of technology news, trends, and products and how they connect. I'm in Red Hat's cloud product strategy group in my day job although I cover a broader set of topics here. This is a personal blog; the opinions are mine alone.
In spite of their popularity at some Web "unicorns" like Netflix, microservices are still in their infancy. In this podcast, Red Hat's Mark Lamourine shares his experiences to date with microservices in addition to offering his take on recent discussions about the best way to get started.
Haff: Hi, everyone. This is Gordon Haff with Red Hat. Welcome to
another edition of the Cloudy Chat Podcast. I am here with my colleague, Mark
Lamourine, who you've heard before on these podcasts if you're a regular
listener. We're going to get back into microservices because Mark has been talking to a
lot of people and doing some exploratory work in microservices. There was also
an interesting blog post that came out recently from Martin Fowler who many of
you probably know. He's written a lot about microservices. It's an interesting
topic to discuss.
We're also (by a path to be determined as we go through this podcast) going
to talk more about Nulecule, about which I produced a podcast with John Mark
Walker a few weeks ago.
Mark is going to dive into a little more of the technical details of Nulecule and
some of the ways that Nulecule relates to other projects out there, and other
things that we've talked about on previous podcasts. Microservices. What have
you been learning about?
Lamourine: Actually I've been helping our marketing department talk
about our point‑of‑view [around microservices]. In those discussions I'm
learning some of the misconceptions people have and some of the ideas they're
exploring. The concept at its highest level is pretty simple: you take tiny components
and you create them in a way that makes them very atomic and mobile. Then you
use them to recompose bigger services. We've been doing that with host‑based
stuff for a very long time. You put the Web server and you put the database
server on the box.
In some senses you can treat each of those conceptually as a microservice. The new
idea is that you don't need to put them on the same host. You can use
containers. You can use VMs. You can use various other things so that these
really become composable in a way that a host‑based system hasn't been until now.
Haff: You've been doing a lot of talking about them. Has your thinking changed?
Lamourine: Most of the thinking that's changed has been around trying to formulate
ways to express how microservices are supposed to be used. We're still early
enough in it in the same way that we are with containers, that people go off
with fairly simple concepts of what a microservice architecture means.
Some people are misled by hype. They might jump into something that maybe they're
not ready for, or maybe they dismiss it because they say, "Well, I've
heard about that before." Then I think a certain amount of education is
necessary for people to be able to make good judgments about when a microservice
is appropriate and when it's not.
Haff: This is probably a good point to jump into talking about the Martin
Fowler post recently. The simplistic headlines—and headlines are nothing if not
simplistic—had him saying, "Oh, just build a big monolithic application
to start with, and break things apart down the road." That's not really
what he said.
Lamourine: It was his headline. It was the hook that got people to go look. If you
read the article more carefully, what you find is that he's repeating things
that have been said to him. He's repeating observations that other people have
made and that he's made to a certain degree when looking at both successful and
unsuccessful new architectures.
The observation is that people have been more successful so far pulling apart monolithic
systems and re‑implementing them as microservices than they have been at creating new
microservice systems from scratch.
That's partly because it's hard to figure out what service boundaries
are, for example.
I suspect that there are a lot of reasons. If you follow the article, one
of the things he says is he's not quite sure why this is. The observation is
there. Why? He leaves it at "We don't really have any deep data." This
is really fairly anecdotal, and it's his impression. A
couple people have commented that his impression is going to be shaped by what
he does. That's his job. His history is going in and re‑factoring things that
have been built a certain way and have problems. He looks at them and he
figures out how to bring sanity to them. That fits the microservice model.
The jury is still out on whether people understand the characteristics of
microservices well enough to start from scratch building the microservice
architecture. I have my suspicions about why. My personal suspicion about why
people would have problems with that is a bit of hammer‑and‑nail psychology.
If you're a new adopter of microservices, you're going to try to make everything a
microservice, and that might not be the appropriate thing to do if you have two
or three things that in fact work together tightly bound.
Haff: Certainly there are organizations that have really, really gone all in on
microservices, the kind of thing Amazon did: "Everything will have a
public API or you'll be fired." Or the Netflix approach.
I suspect if you got under the covers of those organizations, even there
probably everything isn't quite as purist perfect as the public perception is.
However, they really had the opportunity to start with largely clean sheets of paper.
Lamourine: That's true. I'd have to say if I were to go looking at those and find
the places where the purity hasn't been maintained, if that's the right thing
to do then that was probably the right thing to do. That was the point I was making earlier.
When we started with object‑oriented programming, suddenly people who thought that
object‑oriented programming was the most wonderful thing, everything became an
object. We got decomposition nightmares where everything is an object, down to
creating your own int type just because, "Well, it should be an object in
languages like C++."
That's what I meant by the inappropriate use. That's what I'm afraid of in some of these
first attempts at microservices. I suspect people at Amazon fairly quickly started
learning where the boundaries of appropriateness were. If you go look, I
suspect you'd find that.
Haff: The fact is that with a first system you always need that to learn. Of
course then what you do is you build the second system, and you throw too much
new stuff in. Nonetheless you still need a learning experience in some way.
Lamourine: If you follow Fowler's article all the way to the end and you look at his
last paragraph, that's essentially what he says. He says, "This is really
still too young." These are wonderful observations. They
are important observations, but this field is still really too young for
anybody to be making strong claims because there's just not enough data yet to
support strong claims either way about what's the best approach when you're
starting from scratch.
Haff: Just to throw in one last comment on the programming‑language front, you
mentioned C++. Obviously there's an aspect of object orientation in most modern
programming languages at this point.
On the other hand, it's probably worth noting that some of the more academic, really
ultra‑pure everything‑has‑to‑be‑an‑object languages that were out there in some
of the early days, most people probably can't even remember their names anymore.
Lamourine: I can, but [laughs] I don't think most other people can. You don't see
them in common use anymore. You don't see Oberon in common use. You don't see
Modula‑3 in common use. You don't see Smalltalk in common use. That's not to
say they're not used in production in some places. They're just not as
widespread as, I have to say, more practical languages.
Haff: We've been talking about the things that run in containers, for example,
so the development side of things. Let's switch gears and talk about what's
going on with the management and orchestration of those containers.
As I said, I had John Mark Walker on here two or three weeks ago introducing the
Nulecule specification. Let me make one thing clear at the beginning as I think
this has sometimes caused a little bit of confusion over the last couple weeks.
Nulecule is the spec, and Atomic App is the implementation of that spec, Atomic being
this container‑based operating system that is optimized to deliver containers.
Let me give you a shot at explaining a little bit about what the purpose of Nulecule
is and how it relates to some of the other things out there.
Lamourine: We've talked before about a lot of the pieces that we're building with
respect to containers and various other relatively new available technologies.
I see the Nulecule spec and the Atomic App implementation as yet one more
layer. Docker itself is like the initial process: you type a command, and you
create one. It's a great little process. Kubernetes
and OpenShift are ways of creating more than one of these things in an
automated way and maybe distributing them across a platform. In both of those
cases you're still managing them one at a time or maybe in small groups.
What Nulecule does is add the next layer of abstraction where, instead of
describing individual processes and then how the individual processes
communicate with each other, you describe instead a composed service.
I've been working specifically on one. I've been trying to implement the Pulp
service as a purely containerized service without using any of the
orchestration beyond Kubernetes so that when I get that nice and stable we can
apply the Nulecule spec and turn it into an Atomic App as an initial demo.
I picked Pulp early because it's the smallest application I could think of that had
all the elements that you needed. It has a database. It has a messaging
service. It has storage. It has external networking. I've gotten that to the
point where I can start that up on Kubernetes with a shell script. I
run the Kubernetes commands manually via a shell script. The parts are set up so
that they self‑assemble. They don't require sequencing. Now that that's robust
I'm going back to the Nulecule guys or the Atomic App guys and saying,
"OK, how do we describe this as a single application?" That's really
the point of Nulecule.
Nulecule is that next layer where you describe an application as a whole, and you hide
the components again. You only provide the variables that you need to make each
particular instance of the application.
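To make that concrete, here is a toy Python sketch of the idea of describing a composed service as one parameterized artifact: the components are hidden, and only the variables each deployment needs are exposed. The descriptor fields, image names, and parameters are all invented for illustration; this is not the actual Nulecule schema.

```python
from string import Template

# Invented descriptor for a three-component application. A deployer
# sees only "params"; the component list stays hidden inside.
APP_DESCRIPTOR = {
    "name": "demo-app",
    "params": ["db_password", "public_hostname"],
    "components": [
        {"kind": "database",  "image": "postgres", "env": "PASSWORD=$db_password"},
        {"kind": "webserver", "image": "httpd",    "env": "HOSTNAME=$public_hostname"},
        {"kind": "messaging", "image": "qpid",     "env": ""},
    ],
}

def instantiate(descriptor, **variables):
    """Resolve the descriptor's parameters into concrete component specs."""
    missing = [p for p in descriptor["params"] if p not in variables]
    if missing:
        raise ValueError("missing parameters: %s" % missing)
    # Substitute the user-supplied variables into each component's settings.
    return [
        dict(c, env=Template(c["env"]).substitute(variables))
        for c in descriptor["components"]
    ]

specs = instantiate(APP_DESCRIPTOR,
                    db_password="s3cret",
                    public_hostname="pulp.example.com")
```

The point of the shape is that the caller supplies a handful of variables and gets back fully resolved component specs, without ever touching the individual pieces.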
Haff: Nulecule and Atomic App actually make use of container technology themselves?
Lamourine: It does in a way that actually surprised me when I understood it. I
assumed when I was talking about scaling that you'd have a service like
Kubernetes. Then you'd have another big uber‑service over the top of it.
When I understood what the Nulecule spec was, it took me aback. Rather than being
some uber‑service, instead it's a way of expressing the complex relationships
of your application, but you actually stuff that knowledge into another container.
Right now, by a tradition all of six months old, if you were going to create a complex app
with Kubernetes, you'd create a bunch of pods. You'd tell them how to talk
together. When you get to an Atomic App, you'll have one container. You'll say,
"Start that container, and here are my variables."
That container will talk to Kubernetes and act as the agent that causes all of those other
components to come into existence. You get to treat your application as one
container when in fact it's going to be running some large number of
potentially scalable containers without a central service monitoring it or
without the need for it.
You could layer that central service on as another scope if you want. Nulecule's an
interesting term because it implies "Molecule," which is a composed
thing of multiple atoms.
In that sense, if the Atomic App is a molecule of your app where the database is
one atom and the Web server is another atom and the messaging service is
another atom and together they form some complex compound, that does something
that you couldn't have imagined by looking at the pieces.
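The agent pattern Mark describes can be sketched roughly as follows: a single entry-point process asks the orchestrator to create each component and then gets out of the way, leaving Kubernetes to do the ongoing supervision. The manifest names and the `kubectl` invocation are illustrative assumptions, not the actual Atomic App code; the `runner` parameter is injectable so the sketch can be exercised without a live cluster.

```python
import subprocess

# Illustrative component manifests; in reality these would be the pod
# and service definitions for the database, web server, and so on.
COMPONENTS = ["db-pod.yaml", "web-pod.yaml", "messaging-pod.yaml"]

def launch(components, runner=subprocess.run):
    """Hand each component definition to the orchestrator, then exit.

    The components self-assemble, so no particular start order is
    required, and no central monitoring service is left running.
    """
    launched = []
    for manifest in components:
        # The agent only issues the create calls; Kubernetes itself
        # supervises the resulting containers afterwards.
        runner(["kubectl", "create", "-f", manifest])
        launched.append(manifest)
    return launched
```

The user experience is "start one container with my variables"; everything else comes into existence behind that single call.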
Haff: I did want to clarify one other point here, with you mentioning
Kubernetes. Kubernetes is the orchestration that we do specifically use in
Atomic App. That's the orchestration provider of choice in Atomic App, but the
Nulecule spec itself is actually orchestration agnostic. You can specify
different providers there if you choose.
Lamourine: In fact there are three implemented now. Kubernetes is the one we're
using most because we're also trying to beat up Kubernetes and learn more about
it and then harden that. You can also just use plain Docker, or you can use
OpenShift as the backend provider. The goal of the Nulecule spec is to allow
you to hide that.
You pick which one. You specify your application, and you let the Atomic App manage
whichever provider you ask for. If you use the Docker provider, it's going to
call Docker commands directly. You have to run it on a single host. Essentially
you get a stand‑alone Pulp service in this case.
If you use the OpenShift provider, you'd get one that runs in OpenShift. If you
use the default Kubernetes one, you get one that Kubernetes manages. The
goal is that once you've created your Atomic App, if it's done well, all the
user has to do is select the Atomic App, provide a few variables ‑‑ the host
name for their service, what the public IP will be, and how to provide things
like security. What are the keys, the non‑reusable parts? Then your service starts up.
It's a bit rainbows and unicorns at the moment, but that's actually what we're
shooting for. I don't think it's out of reach in the next few months.
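The provider idea above amounts to a simple dispatch: the same application gets deployed through whichever backend the user picks. The three provider names come straight from the discussion; the command lines themselves are simplified illustrations of what each backend might issue, not what Atomic App actually runs.

```python
def deploy_commands(app_name, provider):
    """Return the (hypothetical) commands a provider backend would issue."""
    providers = {
        # single host: drive the Docker CLI directly
        "docker": ["docker run -d --name %s-db" % app_name,
                   "docker run -d --name %s-web" % app_name],
        # cluster: hand pod and service definitions to Kubernetes
        "kubernetes": ["kubectl create -f %s-pods.yaml" % app_name,
                       "kubectl create -f %s-services.yaml" % app_name],
        # PaaS: let OpenShift manage the whole deployment
        "openshift": ["oc new-app %s" % app_name],
    }
    if provider not in providers:
        raise ValueError("unknown provider: %r" % provider)
    return providers[provider]
```

The application description never changes; only the provider selection does, which is exactly the hiding the spec is after.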
Haff: This is really very consistent with the philosophy that we're taking in
this whole space. If you're an enterprise and want to do DevOps, want to
actually do development, want to enable developers with self‑service, you can just
use a platform that's taking care of an awful lot of the container management and application
scalability using these various components for you.
Maybe you do want to assemble your own homegrown implementation, although surveys
we've been doing recently really do show people tending to move away from that
and towards just getting a platform service that bundles all this stuff up. You
can certainly do it.
It's all open‑sourced. There are communities associated with all of these. If you want
to roll your own and you want to combine different things in different ways,
you can absolutely do so.
Lamourine: As somebody who works on the community side a lot, I actually encourage
that. I'd much rather see a rich community environment where people are
experimenting with ideas, trying them out. Like with everything else, 80
percent of everything's going to get thrown away, but if you don't try you
never get the other 20 percent. I want that other 20 percent.
I expect that we're going to see a lot of evolution of both products and concepts
certainly over the next year or two and very likely over the next five years
and decade. Science fiction is happening.
What direction it'll go I can't predict, but I've got a few ideas myself about
things that I think might happen. I can't make them happen yet because there's
a big gap between what we can do now and what I can imagine. People are
imagining, and this is a pretty cool time.
I'm in the cloud product strategy group at Red Hat. Prior to Red Hat, I wrote hundreds of research notes, was frequently quoted in publications like The New York Times on a wide range of IT topics, and advised clients on product and marketing strategies. Earlier in my career, I was responsible for bringing a wide range of computer systems, from minicomputers to large UNIX servers, to market while at Data General. Among other hobbies, I do a lot of photography and enjoy the outdoors.