Friday, June 19, 2015

Where's podcasting on the hype cycle?

Says Farhad Manjoo at The New York Times:

So don’t call podcasting a bubble or a bust. Instead, it is that rarest thing in the technology industry: a slow, steady and unrelentingly persistent digital tortoise that could eventually — but who really knows? — slay the analog behemoths in its path.


I’m not sure about the ultimate result, but steady progress in podcasts seems right to me. (And, while a possible bubble in podcasting ad rates is a legitimate and serious question for those directly involved, I’m not sure it’s a terribly important factor in the overall development of the medium.)

Here are my own experiences and observations.

The hype around podcasts initially was enormous—THEY’RE GOING TO KILL RADIO OMG! Yet a lot of that first wave of podcasts was pretty horrible and self-indulgent. LISTEN TO ME! I’M PODCASTING AS I DRIVE TO THE GROCERY STORE! At the same time, it was a laborious (or at least rather manual) process to get podcasts onto an iPod. Pre-smartphone, syncing things to a mobile device was a pain. It’s the same reason I, like many others, got tired of trying to sync a previous technology generation’s gadget of the moment, the Palm Pilot, with our calendars.

In no particular order, here are a few ways in which today is different.

We have smartphones. While I often find syncing isn’t as automagical as I might like, it’s really not bad—especially if you’re somewhere that has cell coverage.

Even leaving aside the enormous popularity of a show like “Serial,” there’s a solid line-up of professionally produced shows from a wide range of outlets. NPR shows have some of the most consistently high production values overall but...

Even leaving aside those shows assembled by a professional staff, there’s a solid stable of podcasts with clean sound and interesting content, whether niche or of more general interest. Interviews are one common format but hardly the only one. I think it’s fair to say that we’ve collectively gained enough experience with the format that a lot of people have learned how to put together engaging episodes without investing thousands of dollars in gear or spending way too much time in post-production. 

There’s probably still an ongoing shift toward more people consuming multimedia, rather than printed, content. Video gets more of the press here. But I’m at least willing to listen to the argument that the amount of listening may be increasing at the expense of reading.

Another trend that would clearly seem to apply here is a secular shift from listening (and watching) whatever happens to be on to deliberate on-demand choice. (And decreasing incremental effort associated with on-demand—think DVRs vs. VCRs—will only reinforce this change.) 

Don't Skeuomorph your Containers from DevOps Summit

Here's my presentation from DevOps Summit in NYC last week. When I get a chance I'll put up some sort of video but this should work as a placeholder until then.

Skeuomorphism usually means retaining existing design cues in something new that doesn’t actually need them. The concept can be applied more broadly, though: carrying existing patterns over to new technologies that, in fact, cry out for new approaches. In this session, Red Hat’s Gordon Haff will discuss why containers should be paired with new architectural practices such as microservices rather than mimicking legacy server virtualization workflows and architectures.

It’s far more fruitful and useful to approach containers as something fundamentally new and enabling that’s part and parcel of an environment including containerized operating systems, container packaging systems, container orchestration like Kubernetes, DevOps continuous integration and deployment practices, microservices architectures, “cattle” workloads, software-defined everything, management across hybrid infrastructures, and pervasive open source as part of a new platform for cloud apps.

More discussion of containers at redhat.com.

Tuesday, June 16, 2015

Open source, turbocharged pace, and cloud

As usual, Adrian Cockcroft has smart things to say in this interview. The whole thing is worth reading but this comment on the pace of change particularly struck me for a couple of reasons.

And we’ve got now a very open-source world that’s moving extremely quickly. Although it’s not strictly cloud as such, the Docker ecosystem is one of the fastest-moving environments that we’ve ever seen. It’s unprecedented how fast it’s moving. About once a week there’s a seismic event where they change it; a Nepal earthquake-size thing happens on a weekly basis, where you have to say, ‘Okay, everything you knew is slightly different.’ So just trying to track what’s going on in that ecosystem is more than a full-time job. And it’s confusing. It’s also very interesting, and the ability to get things done in that ecosystem is evolving extremely quickly. If you say, ‘I can’t do this thing with Docker,’ you’ve got to time-stamp that. Because maybe next week you can, or maybe in a month everyone’s doing it. Things that normally take years are taking more like months.

The first reason is that Adrian is, of course, absolutely spot-on about how quickly things are changing. I’m a bit embarrassed to admit that the cloud book I published just a couple of years ago now has major holes in the topics that it covers. While many of the basic concepts and their historical antecedents remain valid, containers (to choose the most glaring example) are wholly absent along with all the associated packaging and orchestration work in projects like Docker and Kubernetes. While "It’s the future" is mostly intended to be humorous, it also makes a certain serious point about the rapid swizzling and roiling of software stacks happening today.

Adrian also observes that all of this is happening within a largely open source environment. I’d argue that the rate of experimentation and advance wouldn’t be remotely possible otherwise. All the things coming together around containers and hybrid clouds from DevOps to microservices to internet-of-things to platform-as-a-service are fundamentally made possible by the rapid innovation and ability to recombine software that open source makes possible. It’s no coincidence that we’re seeing this Cambrian explosion taking place in an increasingly open source-anchored and dominated computing world.

Podcast: Nulecule and Microservices with Red Hat's Mark Lamourine

In spite of their popularity at some Web "unicorns" like Netflix, microservices are still in their infancy. In this podcast, Red Hat's Mark Lamourine shares his experiences to date with microservices in addition to offering his take on recent discussions about the best way to get started.

Listen to MP3 (0:18:06)
Listen to OGG (0:18:06)

[Transcript]

Gordon Haff:  Hi, everyone. This is Gordon Haff with Red Hat. Welcome to another edition of the Cloudy Chat Podcast. I am here with my colleague, Mark Lamourine, who you've heard before on these podcasts if you're a regular listener.
Today we're going to get back into microservices because Mark has been talking to a lot of people and doing some exploratory work in microservices. There was also an interesting blog post that came out recently from Martin Fowler, whom many of you probably know. He's written a lot about microservices. It's an interesting topic to discuss.
We are also (by a path to be determined as we go through this podcast) going to talk more about Nulecule, about which I produced a podcast with John Mark Walker a few weeks ago.
Mark is going to dive into a little more of the technical details of Nulecule and some of the ways that Nulecule relates to other projects out there, and other things that we've talked about on previous podcasts. Microservices. What have you been learning about?
Mark Lamourine:  Actually I've been helping our marketing department talk about our point‑of‑view [around microservices]. In those discussions I'm learning some of the misconceptions people have and some of the ideas they're floating.
The concept at its highest level is pretty simple, that you take tiny components and you create them in a way that makes them very atomic and mobile. Then you use them to recompose bigger services. We've been doing that with host‑based stuff for a very long time. You put the Web server and you put the database server on the box.
In some senses you can treat each of those conceptually as a microservice. The new idea is that you don't need to put them on the same host. You can use containers. You can use VMs. You can use various other things so that these really become composable in a way that a host‑based system hasn't been until now.
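To make that composition idea concrete, here's a minimal sketch (mine, not Mark's, with arbitrary names and port) of the kind of single-purpose component he's describing: a tiny HTTP service that does exactly one thing and is consumed over the network.

```python
# A minimal single-purpose "service": one narrow responsibility, reached
# over the network rather than assumed to live on the same host.
from http.server import BaseHTTPRequestHandler, HTTPServer

class GreetingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from a single-purpose service\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # A larger application composes this with other components (a database,
    # messaging, ...) over the network; whether they share a host no longer
    # matters to the caller.
    HTTPServer(("0.0.0.0", 8080), GreetingHandler).serve_forever()
```

A web tier would call this the same way whether it shared a host with the database, ran in a container alongside it, or lived across a cluster.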
Gordon:  You've been doing a lot of talking about them. Has your thinking changed at all?
Mark:  Most of the thinking that's changed has been around trying to formulate ways to express how microservices are supposed to be used. We're still early enough in it, in the same way we are with containers, that people go off with fairly simple concepts of what a microservice architecture means.
But people are misled by hype. They might jump into something that maybe they're not ready for, or maybe they dismiss it because they say, "Well, I've heard about that before." Then I think a certain amount of education is necessary for people to be able to make good judgments about when a microservice is appropriate and when it's not.
Gordon:  This is probably a good point to jump into talking about the Martin Fowler post recently. The simplistic headlines—and headlines are nothing if not simplistic—had him saying, "Oh, just build a big monolithic application to start with, and break things apart down the road." That's not really what he said.
Mark:  It was his headline. It was the hook that got people to go look. If you read the article more carefully, what you find is that he's repeating things that have been said to him. He's repeating observations that other people have made and that he's made to a certain degree when looking at both successful and unsuccessful new architectures.
Namely, that people have been more successful so far pulling apart monolithic systems and re‑implementing them as microservices than they have been at creating new microservice systems from scratch.
Gordon:  That's partly because it's hard to figure out what service boundaries are, for example.
Mark:  I suspect that there are a lot of reasons. If you follow the article, one of the things he says is that he's not quite sure why this is. The observation is there. Why? He leaves it at "We don't really have any deep data." This is really fairly anecdotal, and it's his impression.
A couple of people have commented that his impression is going to be shaped by what he does. That's his job. His history is going in and refactoring things that have been built a certain way and have problems. He looks at them and he figures out how to bring sanity to them. That fits the microservice model.
The jury is still out on whether people understand the characteristics of microservices well enough to start building a microservice architecture from scratch. I have my suspicions about why people would have problems with that: a bit of hammer‑and‑nail psychology.
If you're a new adopter of microservices, you're going to try to make everything a microservice, and that might not be the appropriate thing to do if you have two or three things that in fact work together tightly bound.
Gordon:  Certainly there are organizations that have really, really gone all in on microservices: the Amazon "everything will have a public API or you'll be fired" kind of thing, or the Netflix approach.
I suspect that if you get under the covers of those organizations, even there everything probably isn't quite as purist perfect as the public perception. However, they really had the opportunity to start with largely clean sheets of paper.
Mark:  That's true. I'd have to say that if I were to go looking at those and find the places where the purity hasn't been maintained, breaking the purity was probably the right thing to do in those cases. That was the point I was making earlier.
When we started with object‑oriented programming, people who thought that object‑oriented programming was the most wonderful thing suddenly made everything an object. We got decomposition nightmares where everything was an object, down to creating your own int type just because, "Well, it should be an object," in languages like C++.
That's what I meant by inappropriate use. That's what I'm afraid of in some of the first attempts at microservices. I suspect people at Amazon fairly quickly started learning where the boundaries of appropriateness were. If you go look, I suspect you'd find that.
Gordon:  The fact is that you always need that first system in order to learn. Of course, then what you do is build the second system, and you throw too much new stuff in. Nonetheless, you still need a learning experience in some way.
Mark:  If you follow Fowler's article all the way to the end and you look at his last paragraph, that's essentially what he says. He says, "This is really still too young." These are wonderful observations.
These are important observations, but this field is still really too young for anybody to be making strong claims because there's just not enough data yet to support strong claims either way about what's the best approach when you're starting from scratch.
Gordon:  Just to throw in one last comment on the programming‑language front, you mentioned C++. Obviously there's an aspect of objects in most modern programming languages at this point.
On the other hand, it's probably worth noting that for some of the more academic, really ultra‑pure, everything‑has‑to‑be‑an‑object languages that were out there in the early days, most people probably can't even remember their names any longer.
Mark:  I can, but [laughs] I don't think most other people can. You don't see them in common use anymore. You don't see Oberon in common use. You don't see Modula‑3 in common use. You don't see Smalltalk in common use. That's not to say they're not used in production in some places. They're just not as widespread as, I have to say, more practical languages.
Gordon:  We've been talking about the things that run in containers, for example, so the development side of things. Let's switch gears and talk about what's going on with the management and orchestration of those containers.
As I said I had John Mark Walker on here two or three weeks ago introducing the Nulecule specification. Let me make one thing clear at the beginning as I think this has sometimes caused a little bit of confusion over the last couple weeks.
Nulecule is the spec, and Atomic App is the implementation of that spec, Atomic being this container‑based operating system that is optimized to deliver containers. Let me give you a shot at explaining a little bit about what the purpose of Nulecule is and how it relates to some of the other things out there.
Mark:  We've talked before about a lot of the pieces that we're building with respect to containers and various other relatively new technologies. I see the Nulecule spec and the Atomic App implementation as yet one more layer. Docker itself is like the initial process: you type a command, and you can create one. It's a great little process.
Kubernetes and OpenShift are ways of creating more than one of these things in an automated way and maybe distributing them across a platform. In both of those cases you're still managing them one at a time or maybe in small groups.
What Nulecule does is it adds the next layer of abstraction where, instead of describing individual processes and then how the individual processes communicate with each other, you describe instead a composed service.
I've been working specifically on one. I've been trying to implement the Pulp service as a purely containerized service without using any of the orchestration beyond Kubernetes so that when I get that nice and stable we can apply the Nulecule spec and turn it into an Atomic App as an initial demo.
I picked Pulp early because it's the smallest application I could think of that had all the elements you needed. It has a database. It has a messaging service. It has storage. It has external networking. I've gotten that to the point where I can start it up on Kubernetes with a shell script.
The shell script runs the Kubernetes commands, and the parts are set up so that they self‑assemble. They don't require sequencing. Now that that's robust, I'm going back to the Nulecule guys, or the Atomic App guys, and saying, "OK, how do we describe this as a single application?" That's really the point of Nulecule.
Nulecule is that next layer where you describe an application as a whole, and you hide the components again. You only provide the variables that you need to make each particular instance of the application.
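To make that concrete, here's a rough Python rendering of the sort of driver script Mark describes. The file names are hypothetical, and the real self-assembly logic lives in the containers themselves, which retry until the services they depend on become reachable.

```python
# Rough sketch of a driver script for a self-assembling set of components.
# Filenames are invented for illustration; the point is that submission
# order doesn't matter, because each component retries until its
# dependencies (database, message broker, ...) become reachable.
import subprocess

DEFINITIONS = [
    "pulp-db-service.yaml",
    "pulp-db-pod.yaml",
    "pulp-broker-service.yaml",
    "pulp-broker-pod.yaml",
    "pulp-server-pod.yaml",
]

for definition in DEFINITIONS:
    # 'kubectl create -f <file>' submits a resource definition to the cluster.
    subprocess.check_call(["kubectl", "create", "-f", definition])
```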
Gordon:  Nulecule and the Atomic App actually make use of container technology itself, right?
Mark:  It does in a way that actually surprised me when I understood it. I assumed when I was talking about scaling that you'd have a service like Kubernetes. Then you'd have another big uber‑service over the top of it.
When I understood what the Nulecule spec was, it took me aback. Rather than being some uber‑service, instead it's a way of expressing the complex relationships of your application, but you actually stuff that knowledge into another container.
Traditionally right now (a tradition all of six months old), if you were going to create a complex app with Kubernetes, you'd create a bunch of pods. You'd tell them how to talk together. When you get to an Atomic App, you'll have one container. You'll say, "Start that container, and here are my variables."
It will talk to Kubernetes and act as the agent that causes all of those other components to come into existence. You get to treat your application as one container when in fact it's going to be running some large number of potentially scalable containers without a central service monitoring it or without the need for it.
You could layer that central service on as another scope if you want. Nulecule's an interesting term because it implies "Molecule," which is a composed thing of multiple atoms.
In that sense if the Atomic App is a molecule of your app where the database is one atom and the Web server is another atom and the messaging service is another atom and together they form some complex compound, that does something that you couldn't have imagined by looking at the pieces.
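For the curious, here's roughly the shape of such an application definition, rendered as a Python dict rather than the YAML the spec actually uses. Field names and placement follow my reading of the early Nulecule spec; treat this as a sketch of the idea, not an authoritative example (see the projectatomic/nulecule repository for the real thing).

```python
# Sketch of the shape of a Nulecule application definition (normally YAML).
# Field names approximate the early spec; check projectatomic/nulecule
# before relying on any of them.
nulecule_app = {
    "specversion": "0.0.2",
    "id": "pulp-app",
    # The graph lists the "atoms" that compose the application...
    "graph": [
        {
            "name": "pulp-db",
            # ...each carrying per-provider artifacts, so the same component
            # can be deployed via kubernetes, plain docker, or openshift.
            "artifacts": {
                "kubernetes": ["file://artifacts/kubernetes/pulp-db.yaml"],
            },
            # Only these parameters are exposed to whoever deploys the app;
            # the component wiring stays hidden inside the container.
            "params": [
                {"name": "db_password", "description": "database password"},
            ],
        },
    ],
}
```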
Gordon:  I did want to clarify one other point here, with you mentioning Kubernetes. Kubernetes is the orchestration provider of choice in Atomic App, but the Nulecule spec itself is actually orchestration agnostic. You can specify different providers there if you so choose.
Mark:  In fact there are three implemented now. Kubernetes is the one we're using most because we're also trying to beat up Kubernetes and learn more about it and then harden that. You can also just use plain Docker, or you can use OpenShift as the backend provider. The goal of the Nulecule spec is to allow you to hide that.
You pick which one. You specify your application, and you let the Atomic App manage whichever provider you ask for. If you use the Docker provider, it's going to call Docker commands directly. You have to run it on a single host. Essentially you get a stand‑alone Pulp service in this case.
If you use the OpenShift provider, you'd get one that runs in OpenShift. If you use the default Kubernetes one, you get one where Kubernetes manages the components.
The goal is that once you've created your Atomic App, if it's done well, all the user has to do is select the Atomic App, provide a few variables—the host name for their service, what the public IP will be, and how to provide things like security. What are the keys, the non‑reusable parts? Then your service appears.
That's a bit rainbows and unicorns at the moment, but that's actually what we're shooting for. I don't think it's out of reach in the next few months.
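Again, this is not the actual Atomic App code, just a minimal sketch of the provider-dispatch pattern Mark describes: the application definition stays constant while a thin layer translates it for whichever backend the user selects.

```python
# Minimal sketch of provider dispatch: the same application, different
# backends. Function bodies are illustrative, not the real atomicapp code.
# Note that what "artifact" means differs per provider: an image name for
# Docker, a resource definition file for Kubernetes.
import subprocess

def deploy_with_docker(artifact):
    # Single-host case: run the container image directly.
    subprocess.check_call(["docker", "run", "-d", artifact])

def deploy_with_kubernetes(artifact):
    # Cluster case: hand a resource definition to Kubernetes.
    subprocess.check_call(["kubectl", "create", "-f", artifact])

PROVIDERS = {
    "docker": deploy_with_docker,
    "kubernetes": deploy_with_kubernetes,
    # An "openshift" entry would slot in here in the same way.
}

def deploy(provider, artifact):
    # The user picks the provider; nothing above this line changes.
    PROVIDERS[provider](artifact)
```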
Gordon:  This is really very consistent with the philosophy that we're taking in this whole space. If you're an enterprise and want to do DevOps, want to actually do development, want to enable developers with self‑service, you can just buy OpenShift.
Essentially that's taking care of an awful lot of the container management and application scalability using these various components for you.
If you do want to assemble your own homegrown implementation, you can certainly do that, although surveys we've been doing recently really do show people tending to move away from that and toward just getting a platform service that bundles all this stuff up.
It's all open source, and there are communities associated with all of these pieces. If you want to roll your own and you want to combine different things in different ways, you can absolutely do so.
Mark:  As somebody who works on the community side a lot, I actually encourage that. I'd much rather see a rich community environment where people are experimenting with ideas, trying them out. Like with everything else, 80 percent of everything's going to get thrown away, but if you don't try you never get the other 20 percent. I want that other 20 percent.
I expect that we're going to see a lot of evolution of both products and concepts certainly over the next year or two and very likely over the next five years and decade. Science fiction is happening.

Which direction it'll go I can't predict, but I've got a few ideas myself about things that I think might happen. I can't make them happen yet because there's a big gap between what we can do now and what I can imagine. People are imagining, and this is a pretty cool time.

Thursday, June 11, 2015

Links for 06-11-2015

Thursday, June 04, 2015

A few thoughts from the MassTLC IoT Conference

Eric Fischer cc/Flickr

I can’t say I have any great revelations from MassTLC’s IoT event held at Bentley yesterday, but it was refreshingly non-commercial and helped to reinforce some of the themes I and others have been thinking about at Red Hat as we continue to refine our IoT planning and products. Here are a few points I took down in my notes:

What is IoT? I heard a variety of definitions over the course of the day, but the most common thread was probably the idea of it being “the confluence of the physical world and the digital world coming together,” in the words of Howard Heppelmann of PTC. If you think about it, much of the computing that we have historically done has been pretty data-poor. And you can’t optimize what you can’t see. But tiny, ultra-low-power sensors (see, e.g., RF energy-harvesting power management units) are going to proliferate and thereby create new value by making new types of connections between data and processes (think traffic route optimization and power management).

It really is about the data. There’s a hugely important intersection between IoT and Big Data (however one chooses to define both of those often nebulous terms). In fact, there were two separate panels on data during the day. I took away a couple of specific points. The first is that there seems to be a general consensus emerging that we’re talking about using data for two distinct purposes. One is to take real-time action: “There’s something wrong with the engine; shut it down and/or place a service call.” The other is retrospective analysis, with the goal of learning from the past to prescribe future behaviors.

Rob Purser of MathWorks explicitly defined a three-tier architecture with respect to data:

  • edge nodes: local embedded algorithms and data reduction
  • data aggregator: online analytics that are continuous and well-defined (you know what you are looking for) and visualization/reporting
  • exploratory analysis: historical analytics (more ad hoc, almost forensic) and algorithm development

Rob also had one of my favorite quotes from the day: “Collecting data without analyzing it has negative ROI.”

(At Red Hat, we’ve been referring to the data aggregator tier as a “gateway,” which is Intel’s terminology as well.)
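To make the edge-node tier a bit more concrete, here's a toy sketch of local data reduction; the gateway endpoint and payload format are invented for illustration. A window of raw sensor readings becomes a small summary before anything crosses the network.

```python
# Toy sketch of edge-tier data reduction: summarize a window of raw sensor
# samples locally and ship only the summary to the aggregator/gateway.
# The gateway URL and payload format are invented for illustration.
import json
import statistics
import urllib.request

def summarize(samples):
    # Many raw readings in, a handful of statistics out.
    return {
        "count": len(samples),
        "mean": statistics.mean(samples),
        "min": min(samples),
        "max": max(samples),
    }

def report(samples, gateway_url):
    payload = json.dumps(summarize(samples)).encode("utf-8")
    request = urllib.request.Request(
        gateway_url,
        data=payload,  # supplying a body makes this a POST
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)
```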

IoT is multi-faceted. At this point, it’s worth highlighting an audience member’s comment, which was along the lines that you can’t really talk about IoT, its use of data, and the value that it delivers as a singular thing. I think that’s exactly right. So many IoT discussions end up back at refrigerators placing orders for drone-delivered bottles of milk when you start to run low. Admittedly, people might have had similar sentiments about things like TV remote controls back in the day, but so many of the home IoT solutions much in the news seem like solutions in search of problems. They shouldn’t eclipse the very real value in a lot of businesses. (I suspect there’s opportunity in home power management, but even that is much less interesting than in an industrial context.)

Security, identity management, and data ownership are far from solved. This general topic was poked at throughout the day, but I didn’t come away with a lot of specifics. The broad issue is this: one of the visions for IoT is to take previously siloed data from widespread sensor arrays and combine it with other data to uncover new relationships, thereby optimizing various types of processes and creating new value. But that statement invites lots of questions. Are the endpoint devices themselves exposed to the public network and, if so, how are they secured and kept secure? What authentication mechanisms are necessary? Who owns various types of data and with whom is it shared? One example: in a few years, your car will know lots of things about its local traffic, weather conditions, even the road that it’s driving on. What’s the protocol for making that data available to others who could use it for genuinely useful things? (And is it appropriate to share that you’re also playing that embarrassing Britney Spears song?) And, by the way, governmental rules for all this may change when you drive from France to Germany.

(I’d also note that there was assorted grumbling on Twitter after the MIT Sloan CIO Symposium a couple of weeks ago when the IoT panel at that conference basically punted on having any discussion of security at all.)

Standards aren’t either. For more on this topic, I encourage you to check out my colleague James Kirkland’s piece from earlier this year. The tl;dr is that a lot of good work is being done to solve different types of problems throughout the IoT solution space, but there’s still a lot of fragmentation, and it’s not clear we have a holistic view of all the pieces that are needed or of what should and shouldn’t be standardized. Obligatory xkcd.

Networks need to adapt. The final point I’ll leave you with from the day is the observation that a gazillion little sensors won’t necessarily play well with a network that’s optimized for watching Orange is the New Black on Netflix at 7pm local time. As Kris Alexander of Akamai put it, “Now all of a sudden millions of things are sending small things more frequently. Networks weren’t really designed for this.” Chris Baker of Dyn added that “holding open connections isn’t a core competence of most home routers.” You need to think about the use case. How important is timeliness to your use case? What happens if a device doesn’t check in? (If it’s a pacemaker, the answer might be different than if it’s your DVR.)
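As a small illustration of designing for that constraint, here's a sketch (with an invented URL and interval) of the periodic check-in pattern: short-lived requests with jitter rather than a long-held connection, so a fleet of devices neither depends on fragile open sockets nor stampedes the backend at the same instant.

```python
# Sketch of a periodic device check-in: short-lived requests with jitter
# instead of a long-held open connection. URL and interval are invented;
# a pacemaker and a DVR would pick very different values and failure handling.
import random
import time
import urllib.request

CHECKIN_URL = "https://example.com/device/checkin"  # hypothetical endpoint
BASE_INTERVAL = 300  # seconds between check-ins

while True:
    try:
        urllib.request.urlopen(CHECKIN_URL, timeout=10)
    except OSError:
        # A missed check-in: how much this matters depends on the use case.
        pass
    # Jitter spreads a fleet of devices out so they don't all arrive at once.
    time.sleep(BASE_INTERVAL + random.uniform(0, 30))
```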


Links for 06-04-2015