Friday, January 22, 2016

Book Review: Cooking for Geeks, Second Edition

As a single book, this combination of food science, recipes, equipment, ingredients, experimentation, interviews, and geeky humor is hard to beat. It’s not necessarily deep in all those areas, but it’s an appealing total package for anyone curious about the whys of food.

It’s the second edition of this book by Jeff Potter. At 488 pages, it’s about 50 pages longer than its predecessor. There are new interviews and graphics along with a fair bit of updating and rearranging from the prior edition—although the overall look, feel, and organization aren’t a major departure.

The book is written in a lighthearted and gently humorous way. A random sample from the intro to Taste, Smell, and Flavor: “You open your fridge and see pickles, strawberries, and tortillas. What do you do? You might answer: create a pickle/strawberry burrito. Or if you’re less adventurous, you might say: order pizza. But somewhere between making a gross-sounding burrito and ordering takeout is another option: figuring out the answer to one of life’s deeper questions: How do I know what goes together?” Probably not to everyone’s taste, I realize, but it works for me.

It covers a broad swath of the science: the aforementioned tastes, smells, and flavors; time and temperature, and what those mean for cooking proteins and melting fats; additives like emulsifiers and thickening agents; air, water, and leavening agents. It’s not the science tome that is Harold McGee’s On Food and Cooking, but it’s a more easily accessible presentation. (Though, if you read this book and enjoy it, by all means pick up McGee and vice versa.)

Cooking for Geeks at least touches on most of the major modernist cooking practices, including sous vide and practical tips for same. Arguably, some of the DIY material around sous vide is a bit dated given the price drops of modern immersion circulators, but this is about experimentation, after all. (The link in the book does go to a list of current equipment options though.) There are also interviews with many of the usual suspects in that space, such as Nathan Myhrvold and Dave Arnold.

Is this book for the cooking-curious geek who doesn’t have much real cooking experience? It could be, but they might want to pair it with another book that’s more focused on basic cooking techniques. The recipes here are relatively straightforward and the instructions are clear, but there’s not a lot of photography devoted to the recipes, and the instructions for things like Béchamel Sauce are probably a bit bare-bones for a first-timer.

I’d also generally note that the recipes are often there to provide examples of the science discussion. There isn’t a lot of discussion about why this specific recipe is being made with this specific set of techniques. For that sort of thing, I recommend book(s) from the America’s Test Kitchen empire, perhaps starting with their The New Best Recipes book—which also has the virtue of being a pretty comprehensive set of basic and not-so-basic recipes. It’s also rather sober and by-the-numbers, a much different experience. (Alton Brown also seems to have his followers in geeky circles although I’ve never personally been all that enthusiastic.)

One final point is that, for many, this is a book you will flip through and alight on a topic of interest. It’s not that you couldn’t read it from cover to cover, but the many sidebars and interviews and short chunks of material seem to encourage non-linear exploration. 

Bottom line: 5/5. Highly recommended for anyone with an interest in the science of cooking even if they don’t want to get into deep chemistry and physics.

Disclaimer: This book was provided to me as a review copy but this review represents my honest assessment.

Links for 01-22-2016

The new distributed application development platform: Breaking down silos


A document came out of Gaithersburg, Maryland in 2011. Published by the National Institute of Standards and Technology it was simply titled “The NIST Definition of Cloud Computing.” If you attended tech conferences during that period, reciting some version of that definition was pretty much a requirement. The private, public, and hybrid cloud terms were in this document. So were concepts like on-demand self-service and resource pooling. As were the familiar Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) service models. 

NIST didn’t invent any of this out of whole cloth. But by packaging up a sort of industry and government consensus about the basics of cloud computing, they regularized and standardized that consensus. And, overall, it worked pretty well. IaaS was about provisioning fundamental computing resources like processing, storage, and networks. SaaS was about providing applications to consumers.

As for PaaS? PaaS was about applications created using programming languages, libraries, services, and tools supported by the provider. 

Arguably, this PaaS definition was never as neat as the others. IaaS resources were easy to understand; they were like the resources you have on a server, except cloudier. And SaaS was just an app on the Web—application service providers (ASPs) reimagined, if you would. PaaS was sort of everything that was above infrastructure but below an application an end-user could run directly. Cloud-enabled middleware, hooks to add features to a single online service like Salesforce.com, single-purpose hosted programming environments (as Google App Engine and Azure were initially), and open extensible environments like OpenShift that could also be installed on-premise. Most fell broadly under the PaaS rubric. 

The NIST definition also didn’t really capture how the nature of the service model depends upon the audience to an extent. Thus, Salesforce.com is primarily a SaaS as far as the end-user is concerned but it’s a PaaS in the context of developers extending a CRM application. 

Today, I’d argue that the lines NIST drew probably still have some practical significance but the distinctions are increasingly frayed. IaaS platforms have almost universally moved beyond simple infrastructure. OpenStack has compute (Nova), storage (Swift and Cinder), and networking (Neutron) components but it also includes database projects (Trove), identity management (Keystone), and the Heat orchestration engine to launch composite cloud applications.
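
To make that "more than infrastructure" point concrete, here's a rough sketch of launching a (trivial) composite application through Heat's REST API. It's illustrative only: it assumes you already have a Keystone token and the Heat endpoint for your tenant from the service catalog, and the endpoint URL, image, and flavor names are all placeholders.

    import requests

    # A minimal HOT (Heat Orchestration Template), expressed as a dict:
    # a single Nova server. Real templates compose many resources
    # (servers, networks, volumes) into one deployable stack.
    template = {
        "heat_template_version": "2013-05-23",
        "resources": {
            "web_server": {
                "type": "OS::Nova::Server",
                "properties": {"image": "cirros", "flavor": "m1.small"},
            }
        },
    }

    # Placeholders: in practice both come from Keystone's service catalog.
    heat_endpoint = "http://heat.example.com:8004/v1/TENANT_ID"
    token = "KEYSTONE_TOKEN"

    # Ask Heat to create the stack described by the template.
    resp = requests.post(
        heat_endpoint + "/stacks",
        headers={"X-Auth-Token": token},
        json={"stack_name": "demo-stack", "template": template},
    )
    print(resp.status_code)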

In many cases these higher-level functions can be either used standalone or replaced/complemented by more comprehensive alternatives. For example, in a hybrid cloud environment, a cloud management platform like Red Hat CloudForms (ManageIQ is the upstream project) provides multi-cloud management and sophisticated policy controls. The IaaS+ term is sometimes used to capture this idea of more than base-level infrastructure but less than a comprehensive developer platform.

In the case of SaaS, today’s APIs-everywhere world means that most things with a UI can also be accessed programmatically in various ways. In other words, they’re platforms—however limited in scope and however tied to a single application.
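
A trivial illustration of the point. The calls below are against a hypothetical CRM service; the URL and field names are invented for this example rather than any real product's API, but nearly every modern SaaS exposes something that looks like them.

    import requests

    # Hypothetical SaaS CRM API -- URL and schema invented for illustration.
    API = "https://api.example-crm.com/v1"
    HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

    # Read the same records a user would see in the web UI.
    contacts = requests.get(API + "/contacts", headers=HEADERS).json()
    print(len(contacts), "contacts")

    # Write back programmatically -- at which point the "application"
    # is also functioning as a (narrow, single-purpose) platform.
    requests.post(API + "/contacts", headers=HEADERS,
                  json={"name": "Ada Lovelace", "stage": "qualified"})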

But, really, the fraying is broader than that. I’ve argued previously that we’re in the process of shifting toward a new style of distributed application infrastructure and of developing applications for that infrastructure. It won’t happen immediately—hence, Gartner’s bimodal IT model—but it will happen. In the process, traditional specialties/silos (depending upon your perspective) are breaking down. This is true whether you’re talking enterprise buyers/influencers, IT organizations, industry analysts, or vendors. 

As a result, it's hard to separate PaaS, in the relatively narrow sense in which it was first discussed, from the broader idea of an application development platform with middleware, integration, messaging, mobile, and other services. Red Hat's doing a lot of work to bridge those two worlds. For example, Red Hat’s JBoss Middleware portfolio of libraries, services, frameworks, and tools is widely used by developers to build enterprise applications, integrate applications and data, and automate business processes. With JBoss xPaaS Services for OpenShift, these same capabilities are being offered integrated with OpenShift. This lets developers build applications, integrate with other systems, orchestrate using rules and processes, and then deploy across hybrid environments.

The advantage of the xPaaS approach is that it doesn’t merely put middleware into the cloud in its traditional form. Rather, it effectively reimagines enterprise application development to enable faster, easier, and less error-prone provisioning and configuration for a more productive developer experience. Eventually all of the JBoss Middleware products will have xPaaS variants. In each case, the core product is exactly the same whether used in a traditional on-premise manner or as xPaaS, so apps can be moved seamlessly between environments. In the xPaaS environment, JBoss Middleware developers benefit from OpenShift-based user interface enhancements, automated configuration, and a more consistent experience across different middleware products.

Then DevOps [1] comes along to blur even more lines because it brings in a whole other set of (often open source) tooling, including CI/CD (e.g. Jenkins), automation and configuration management (e.g. Ansible), collaboration, testing, monitoring, and more. These are increasingly part of that new distributed application platform, as is the culture of iteration and collaboration that DevOps requires.
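
Much of the point is that these tools are themselves programmable. As a small hedged sketch, here's how a script might queue a build through Jenkins's remote API; the server URL, job name ("myapp"), and credentials are placeholders, and real installations with CSRF protection enabled also require a crumb header.

    import requests

    # Placeholders: your Jenkins server, a job named "myapp", and a
    # user API token. POSTing to /job/<name>/build queues a build.
    JENKINS = "https://jenkins.example.com"
    resp = requests.post(
        JENKINS + "/job/myapp/build",
        auth=("automation-user", "API_TOKEN"),
    )

    # Jenkins responds 201 Created once the build has been queued.
    print(resp.status_code)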

I have trouble not looking at this breakdown of historical taxonomies as a positive. It offers the possibility of more complete and better integrated application development platforms and more effective processes to use those platforms. It’s not the old siloed world any longer.

[1] I just published this white paper that gives my/Red Hat’s view of DevOps.

Photo credit: Flickr/cc https://www.flickr.com/photos/timbodon/2200884398

 

Friday, January 15, 2016

Why bimodal is a useful model


You hear a lot about “bimodal” IT these days. Gartner’s generally considered to have coined that specific term but similar concepts have percolated up from a number of places. Whatever you call it, the basic idea is this:

You have Classic IT that’s about attributes like stability, robustness, cost-effectiveness, and vertical scale. These attributes come through a combination of infrastructure and carefully controlled change management processes. IT has classically operated like this and it works well. Financial trades and transfers execute with precision, reliability, speed, and accuracy. The traditional enterprise resource planning systems operate consistently and rarely have a significant failure.

By contrast, cloud-native IT emphasizes adaptability and agility. Infrastructure is software-defined and therefore programmable through APIs. Cloud-native applications running on OpenStack infrastructure are loosely coupled and distributed. In many cases, the dominant application design paradigm will come to be microservices—reusable single-function components communicating through lightweight interfaces.
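
To make "single-function component" a little less abstract, here's a toy sketch of a microservice using Flask. The route and the unit-conversion job are arbitrary stand-ins for whatever one narrow business capability a real service would own.

    from flask import Flask, jsonify

    app = Flask(__name__)

    # One service, one job: convert Celsius to Fahrenheit. A real
    # microservice would own one narrow business capability instead.
    @app.route("/convert/<float:celsius>")
    def convert(celsius):
        return jsonify(celsius=celsius, fahrenheit=celsius * 9 / 5 + 32)

    if __name__ == "__main__":
        # Lightweight HTTP/JSON interface; other services are its clients.
        app.run(port=5000)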

The argument for taking an explicit bimodal approach is essentially two-fold. 

On the one hand, organizations have to embrace “go fast” cloud-native platforms and practices going forward if they’re going to be able to use IT to help strategically differentiate themselves. And they increasingly have to. Apps. Software services. Robust online customer service. The list goes on. 

On the other hand, for most organizations with existing IT investments, it’s not realistic—too expensive, too disruptive—to just call in the bulldozers and start over from scratch. Yet it’s equally impractical to just treat IT as a uniform “timid middle” based on practices and approaches too fast for traditional business systems but too slow for fast-paced, experimental innovation.

That said, the model has its critics. In my view, most of these criticisms come from misunderstanding (willfully or otherwise) what a thoughtful application of this model is really about. So I’m going to take you through some of these criticisms and explain why I think they’re off the mark. 

Bimodal IT treats traditional IT as legacy and sets it up for failure.

This critique is probably the most common one I hear and, in all fairness, it’s partly because some of the nuances of the bimodal model aren’t always obvious. Gartner, at least, has always been explicit that Mode 1 (classic) IT needs to be renovated and modernized. Here’s just one quote from CSPs' Digital Business Requires a Bimodal IT Transformation Strategy, October 2014: "Modifying the existing IT infrastructure for an effective and efficient use, while maintaining a reliable IT environment, requires CIOs to implement incremental IT modernization." 

Modernization is indeed key to making the model work. Another Gartner research note, DevOps is the Bimodal Bridge (April 2015), observes: “DevOps is often thought of as an approach only applicable to a Mode 2 or nonlinear style of IT behavior. Yet there are key parts or patterns of DevOps that are equally applicable to Mode 1 IT organizations that enable DevOps to be a bridge between the two IT approaches.”

Incrementally upgrading platforms (e.g. proprietary Unix to Linux) and modernizing application development practices are essential elements of a bimodal approach.

Bimodal IT is a crutch for lazy CIOs

Related to the above, this argument goes that bimodal IT gives CIOs a license not to aggressively pursue cloud-native initiatives on the grounds that they can just argue that most of their IT needs to remain in its go-slow form. As John Willis has put it, “I think a lot of folk think that mode 1 is the wrong message… :-)” and “I think also most feel (like me) that Bi-modal is a get out of jail free card for bad process/culture…”

Those criticisms are fair, at least up to a point. But Dave Roberts also made some points in the discussion that largely reflect my thinking. He notes that “Most of it [issues with bimodal] seems predicated on piss-poor management practices, which if you have those you’re screwed anyway.” He adds, “If you want to be lazy, you will find a way. But that’s true regardless of model.”

At the end of the day, I think what we’re seeing to a certain degree here is a debate between pragmatists and those who place a higher priority on moving fast even if doing so breaks things. I’m inclined to align with the pragmatists while acknowledging that part of pragmatism is recognizing when circumstances require breakage over taking measured steps. To give Dave the final word: “Obviously, use the model wisely. If your market requires speed on all fronts, then you need Mode 2 everywhere."

Bimodal is too simple

This is essentially the opposite argument. Bimodal doesn’t capture the complexity of IT.

Sometimes the critique is quite specific. For example, Simon Wardley argues that "When it comes to organising then each component not only needs different aptitudes (e.g. engineering + design) but also different attitudes (i.e. engineering in genesis is not the same as engineering in industrialised). To solve this, you end up implementing a "trimodal" (three party) structure such as pioneers, settlers and town planners which is governed by a process of theft."

Alternatively, some of the criticism boils down to a more generic argument that IT is complex and heterogeneous and no general model can really capture that complexity and heterogeneity so we shouldn’t even try.

The value of a bimodal model

To this last point, I say that all models simplify and abstract but they’re no less useful for that. They suggest common patterns and common approaches. They’re not (or shouldn’t be) intended as rigid prescriptive frameworks that precisely describe messy realities but they may offer insights into moving those messy realities forward in a practical way.

Is bimodal the only possible model? Of course not. I’m not going to argue that, say, Pioneers/Settlers/Town Planners isn't an equally valid framework. If that, or something else, works for you, go for it! All I can say is that a lot of IT executives I speak with find the two-speed IT lens a useful one because it resonates with their experiences and their requirements.

Which suggests to me that it’s a useful model for many IT organizations at this point in time. Just don’t forget that it is, after all, only a model and a guide, not a detailed roadmap to be followed slavishly.

Photo by Stephen Shankland. Used with permission. https://www.flickr.com/photos/shankrad/24079824410/ 

Thursday, January 07, 2016

IDC survey says: Go cloud-native but modernize too

IDC’s recent survey of “cloud native” early adopters tells us that existing applications and infrastructure aren’t going away. 83 percent expect to continue to support existing applications and infrastructure for the next three years. In fact, those who are furthest along in shifting to distributed, scale-out, microservices-based applications are twice as likely to say that they are going to take their time migrating than those who are less experienced with implementing cloud native applications and infrastructure. It’s easier to be optimistic when you haven’t been bloodied yet!

IDC conducted this survey of 301 North American and EMEA enterprises on Red Hat’s behalf; the results are published in a December 2015 IDC InfoBrief entitled Blending Cloud Native & Conventional Applications: 10 Lessons Learned from Early Adopters.


It’s worth noting that even these cloud native early adopters plan to also modernize their existing conventional infrastructure. For example, in addition to the 51 percent continuing with their virtualization plans, 42 percent plan to migrate to software-defined storage/networking and to containerize applications currently running on virtual or physical servers. 

This is an important point. The bimodal IT concept—originally a Gartnerism but now used pretty widely to connote two speeds or two modes of IT delivery—is sometimes critiqued for a variety of reasons. (To be covered in a future post.) However, perhaps the most common criticism is that Mode 1 is a Get Out of Jail Free card for IT organizations wanting to just continue down a business as usual path. This survey shows that those furthest along in transitioning to cloud-native don’t see things that way at all. (It should be mentioned that Gartner doesn’t either and sees modernization as a key component of Mode 1.)

Open source was almost universally seen as playing a key role in any such strategy with 96 percent viewing open source as an enabler of cloud native integration and conventional app modernization. No surprises there. An earlier IDC survey on DevOps early adopters found a similar view of open source with respect to DevOps tooling.

The study also found that security and integration were important elements of a cloud native transition strategy. For example, 51 percent identified security, user access control, and compliance policies as a technical factor that would have the greatest impact on their organization’s decisions about whether applications are best supported by conventional or cloud native architectures.

The #2 factor (42 percent) was the ability to support/integrate with existing databases and conventional applications--highlighting the need for management tools and process integration between new applications and existing workflows, apps, and data stores. Business Process Optimization was identified as an important integration element. Strategies included structured and unstructured data integration, business process automation, model-driven process management, and the use of an enterprise service bus and cloud APIs.

If I had to choose one word to convey the overall gestalt of the survey, I think I’d choose “pragmatic.” IDC surveyed cloud native early adopters, so these are relatively leading edge IT organizations. Yet, these same organizations also emphasized SLAs and minimizing business risks. They stress avoiding business and end-user disruption. They plan to transition gradually.  

Links for 01-07-2016

Wednesday, January 06, 2016

Beyond general purpose in servers

[Image: Intel Broadwell 14nm wafer]

Shortly before I checked out for the holidays, I had the pleasure of giving a keynote at SDI Summit in Santa Clara, CA. The name might suggest an event all about software (SDI = software-defined infrastructure) but, in fact, the event had a pretty strong hardware flavor. The organizers, Conference Concepts, put on events like the Flash Memory Summit.

As a result, I ended up having a lot more hardware-related discussions than I usually do at the events I attend. This included catching up with various industry analysts who I’ve known since the days I was an analyst myself and spent a lot of time looking at server hardware designs and the like. In any case, some of this back and forth started to crystallize some of my thoughts around how the server hardware landscape could start changing. Some of this is still rather speculative. However, my basic thesis is that software people are probably going to start thinking more about the platforms they’re running on rather than taking for granted that they’re racks of dual-socket x86 boxes. Boom. Done.

What follows are some of the trends/changes I think are worth keeping an eye on.

CMOS scaling limits

If this were a headline, it would probably be titled “The End of Moore’s Law,” but I’m not looking for the clicks. This is a complicated subject that I’m not going to be able to give its appropriate due here. However, it’s at the root of some other things I want to cover. 

Intel is shipping 14nm processors today (Broadwell and Skylake). It’s slipped the 10nm Cannonlake out to the second half of 2017. From there things get increasingly speculative: 7nm, 5nm, maybe 3nm.

There are various paths forward to pack in more transistors. It seems as if there’s a consensus developing around 3D stacking and other packaging improvements as a good near-ish term bet. Improved interconnects between chips are likely another area of interest. For a good read, I point you to Robert Colwell, presenting at Hot Chips in 2013 when he was Director of the Microsystems Technology Office at DARPA.

However, Colwell also points out that from 1980 to 2010, clocks improved 3500X while microarchitectural and other improvements contributed only about another 50X performance boost. The process shrink marvel expressed by Moore’s Law (Observation) has overshadowed just about everything else. This is not to belittle in any way all the hard and smart engineering work that went into getting CMOS process technology to the point where it is today. But understand that CMOS has been a very special unicorn and an equivalent CMOS 2.0 isn’t likely to pop into existence anytime soon.

[Image: Intel 10nm challenges]

Moore’s Law trumped all

There are doubtless macro effects stemming from processors not getting faster or memory not getting denser (at least as quickly as in the past), but I’m going to keep this focused on how this change could affect server designs.

When I was an analyst, we took lots of calls from vendors wanting to discuss their products. Some were looking for advice. Others just wanted us to write about them. In any case, we saw a fair number of specialty processors. Many were designed around some sort of massive-number-of-cores concept. At the time (roughly back half of the 2000s), there was a lot of interest in thread-level parallelism. Furthermore, fabs like TSMC were a good option for hardware startups wanting to design chips without having to manufacture them.

Almost universally, these startups didn’t make it. Part of it is just that, well, most startups don’t make it and the capital requirements for even fabless custom hardware are relatively high. However, there was also a pattern.

Even in the best case, these companies were fighting a relentless doubling of processor speed every 18 to 24 months from Intel (and sometimes AMD) on the back of enormous volume. So these companies didn’t just need to have a more optimized design than x86. They needed to be so much better that they could overcome the aforementioned x86 inertia while competing, on much lower volume, against someone improving at a rapid, predictable pace. It was a tough equation.

I saw lots of other specialty designs too. FPGAs, GPU computing, special interconnect designs. Some of this has found takers in high performance computing, which has always been more willing to embrace the unusual in search of speed. However, in the main, the fact that Moore’s Law was going to correct any performance shortcomings in a generation or two made sticking with mainstream x86 an attractive default.

The rise of specialists

In Colwell’s aforementioned presentation, he argues that the “end of Moore’s Law revives special purpose designs.” (He adds the caveat to heed the lessons of the past and not to design unprogrammable engines.) Intel’s recent $16.7 billion acquisition of Altera can be seen as part of a transition to a world in which we see more special purpose chips. As the linked WSJ article notes: "Microsoft and others, seeking faster performance for tasks like Web searches, have experimented with augmenting Intel’s processors with the kind of chips sold by Altera, known as FPGAs, or field programmable gate arrays. Intel’s first product priority after closing the Altera deal is to extend that concept."

Of course, CPUs have long been complemented by other types of processors for functions like networking and storage. However, the software-defined trend has been at least somewhat predicated on moving away from specialty hardware toward a standardized programmable substrate. (So, yes, there’s some irony in discussing these topics at an SDI Summit.)

I suspect that it’s just a tradeoff that we’ll have to live with. Some areas of acceleration will probably standardize and possibly even be folded into CPUs. Other types of specialty hardware will be used only when the performance benefits are compelling enough for a given application to be worth the additional effort. It’s also worth noting that the increased use of open source software means that end-user companies have far more options to modify applications and other code to use specialized hardware than when they were limited to proprietary vendors.  

ARM AArch64

Flagging ARM as another example of potential specialization is something of a no-brainer even if the degree and timing of the impact are TBD. ARM is clearly playing a big part in mobile. But there are reasons to think it may have a bigger role in servers than in the past. That it now supports 64-bit is huge because that's table stakes for most server designs today. However, almost as important is that ARM vendors have been working to agree on certain standards.

As my colleague Jon Masters wrote when we released Red Hat Enterprise Linux Server for ARM Development Preview: "RHELSA DP targets industry standards that we have helped to drive for the past few years, including the ARM SBSA (Server Base System Architecture), and the ARM SBBR (Server Base Boot Requirements). These will collectively allow for a single 64-bit ARM Enterprise server Operating System image that supports the full range of compliant systems out of the box (as well as many future systems that have yet to be released through minor driver updates). “ (Press release.)

There are counter-arguments. x86 has a lot of inertia even if some of the contributors to that inertia like proprietary packaged software are less universally important than they were. And there’s lots of wreckage associated with past reduced-power servers both using ARM (Calxeda) and x86-compatible (Transmeta) designs.

But I’m certainly willing to entertain the argument that AArch64 is at least interesting for some segments in a way that past alternatives weren’t.

Parting thoughts

In the keynote I gave at SDI Summit, The New Distributed Application Infrastructure, I argued that we’re in a period of rapid transition from a longtime model built around long-lived applications installed in operating systems to one in which applications are far more componentized, abstracted, and dynamic. The hardware? Necessary but essentially commoditized.

That’s a fine starting point for thinking about where software-defined infrastructure is going. But I increasingly suspect it makes a simplifying assumption that won’t hold much longer. The operating system will help to abstract away changes and specializations in the hardware foundation as it has in the past. But that foundation will have to adjust to a reality that can’t depend on CMOS scaling to advance.

Tuesday, January 05, 2016

Getting from here to there: conventional and cloud-native


Before the holiday break, I wrote a series of posts over at the Red Hat Stack blog in which I added my thoughts about cloud native architectures, bridging those architectures with conventional applications, and some of the ways to think about transitioning between different architectural styles. My jumping off point was an IDC Analyst Connection in which Mary Johnson Turner and Gary Chen answered five questions about "Bridging the Gap Between Cloud-Native and Conventional Enterprise Applications." Below are those questions and the links to my posts:

Cloud-native application architectures promise improved business agility and the ability to innovate more rapidly than ever before. However, many existing conventional applications will provide important business value for many years. Does an organization have to commit 100% to one architecture versus another to realize true business benefits?

http://redhatstackblog.redhat.com/2015/11/19/does-cloud-native-have-to-mean-all-in/

What are the typical challenges that organizations need to address as part of this evolution [to IT that at least includes a strong cloud-native component]?

http://redhatstackblog.redhat.com/2015/11/30/evolving-it-architectures-it-can-be-hard/

How will IT management skills, tools, and processes need to change [with the introduction of cloud-native architectures]?

http://redhatstackblog.redhat.com/2015/12/03/how-cloud-native-needs-cultural-change/

What about existing conventional applications and infrastructure? Is it worth the time and effort to continue to modernize and upgrade conventional systems?

http://redhatstackblog.redhat.com/2015/12/09/why-cloud-native-depends-on-modernization/

What types of technologies are available to facilitate the integration of multiple generations of infrastructure and applications as hybrid cloud-native and conventional architectures evolve?

http://redhatstackblog.redhat.com/2015/12/15/integrating-classic-it-with-cloud-native/

Photo credit: Flickr/CC Scott Robinson https://www.flickr.com/photos/clearlyambiguous/390311509 

Monday, January 04, 2016

What's up with Gordon in 2016?

First off, let me say that I’m not planning big changes although I’m sure my activities will continue to evolve as the market does. Red Hat’s doing interesting work in a diverse set of related areas and I’ll continue to evangelize those technologies, especially as they span multiple product sets. With that said, here’s how the year looks to be shaping up so far.

Travel and speaking. Last year broke a string of most-travel-ever years with airline mileage ending up “just” in the 60,000 mile range. This was partially because I didn’t make it to Asia last year, but it was still a somewhat saner schedule overall. It remains to be seen what this year will bring, but I’ll probably shoot for a similar level.

I already know I’ll be at Monkigras in London, ConfigMgmtCamp in Gent, CloudExpo in NYC, Interop in Vegas, and IEEE DevOps Unleashed in Santa Monica. I also typically attend a variety of Linux Foundation events, an O’Reilly event or two, Red Hat Summit (in SF this year), and VMworld (although I always say I won’t); I will probably do most of these this year as well. I may ramp things up a bit—especially for smaller gatherings—in my current areas of focus, specifically DevOps and IoT. This translates into DevOps Days and other events TBD.

If there’s some event that you think I should take a look at or would like me to speak, drop me a line. Note that I’m not directly involved with sponsorships, especially for large events, so if you’re really contacting me to ask for money, please save us both some time.

Writing. I have various papers in flight at the moment and need to map out what’s needed over the next six months or so. I also begin the year with my usual good intentions about blogging, which I was reasonably good about last year. My publishing schedule to this blog was down a bit, but I’ve also been writing for opensource.com, redhatstackblog.redhat.com, and openshift.com—as well as a variety of online publications.

You’re reading this on my "personal" blog. It's mostly (75%+) devoted to topics that fall generally under the umbrella of "tech." I generally keep the blog going with short link-comments when I'm not pushing out anything longer. The opinions expressed on this blog are mine alone and the content, including Red Hat-related content, is solely under my control. I’m also cross-posting to Medium when I feel it’s justified.

My biggest ambition this year is to publish a new book. This has been the subject of on-again, off-again mulling for the last 12 to 18 months or so. I began with the intent to just revise Computing Next to bring in containers and otherwise adapt the content to the frenetic change that’s going on in the IT industry today. However, as time went on, this approach made less and less sense. Too much was different and too many aspects required reframing.

Therefore, while I will probably repurpose some existing content, I’m going to start fresh. The pace of change still makes writing a book challenging but, given where we are with containers, xaaS, DevOps, IoT, etc., I’m hopeful that I can put something together that has some shelf life. My current plan is to shoot for something in the 100-120 page range (i.e. longer than pamphlet-style but shorter than a traditional trade paperback) for completion by the summer. I’d really like to have it done by Red Hat Summit but we’ll see how likely that is. The working title is Phase Shift: Computing for the New Millennium, and it will focus on how new infrastructure, application development trends, mobile, and IoT all fit together.

Podcasts. I grade myself about a B for podcasts last year. I think I had some good ones but wasn’t as aggressive about scheduling recording sessions as I could have been. I expect this year will end up similarly although I’m going to make an effort to bring in outside interview subjects on a variety of topics. I find 15 minute interviews are a good way to get interesting discussions out there without too much effort. (And I get them all transcribed for those who would prefer to read.)

Video. Video seems to be one thing that largely drops off my list. It takes a fair bit of work and I’ve struggled with how to use it in a way that adds value and is at least reasonably professional looking. It probably doesn’t help that I’m personally not big into watching videos when there are other sources of information.

Social networks. I am active on twitter as @ghaff. As with this blog, I concentrate on tech topics but no guarantees that I won't get into other topics from time to time.

I mostly view LinkedIn as a sort of professional rolodex. If I've met you and you send me a LinkedIn invite, I'll probably accept though it might help to remind me who you are. I'm most likely to ignore you if the connection isn’t obvious, you send me a generic invite, and/or you appear to be just inviting everyone under the sun. I also post links to relevant blog posts when I remember.

I'm a pretty casual user of Facebook and I limit it to actual friends. That's not to say that some of them aren't professional acquaintances as well. But if you just met me at a conference somewhere and want to friend me, please understand if I ignore you.

I use Google+ primarily as an additional channel to draw attention to blogs and other material that I have created. I also participate in various conversations there. As with twitter, technology topics predominate on my Google+ stream.

I use flickr extensively for personal photography.

Links for 01-04-2016

Friday, December 18, 2015

Review: Beddit Smart Sleep Monitor

[Photo: Beddit Smart sensor strip on a mattress]

Much of the focus on activity and health trackers is on wearables. Think Fitbit, Apple Watch, and so forth. However, arguably, this isn’t the best approach for sleep tracking given that such devices need to be plugged in every few days or so and nighttime is the most logical time to do so. 

The $150 Beddit Smart offers an alternative. It’s a thin strip that lies across your mattress, plugs into a USB port for power, and interfaces with iOS and Android apps over Bluetooth. In my testing, the strip worked even if it was underneath some amount of padding—a featherbed in my case. (The photo is a bit misleading; normally the sensor would be under at least a sheet.)

The strip is a force sensor that measures mechanical cardiac activity using ballistocardiography (BCG). According to the company, “Each time the heart beats, the acceleration of blood generates a mechanical impulse that can be measured with a proper force sensor.” By contrast, sleep clinics generally use polysomnography (PSG), which is basically a fancy way of saying that they use a variety of data from different sources to measure things like brain waves and eye movements.

This brochure provides more details about the science behind the device. While I certainly didn’t have the means to personally calibrate results against medical equipment, the Beddit’s data appeared to be at least roughly consistent with the readings from my Fitbit Charge HR over the same period. The Beddit, however, provides more detailed tracking of how much you’re moving around. In conjunction with a smartphone, it can also track snoring. (The Fitbit relies on data from its accelerometer, which can only measure comparatively gross movements.)

If you really want to geek out, here’s a PhD thesis by Joonas Paalasmaa, the CTO and Chief Scientist of Beddit, from the University of Helsinki about monitoring sleep with force sensor measurement. A force sensor is a thin and flexible force sensing resistor in an electrical circuit. When the force sensor is unloaded, its resistance is very high. When a force is applied to the sensor, this resistance decreases. Various techniques can then be applied to the resistance data to infer heart rate and respiration, which can then be correlated to sleep state.  
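
To give a flavor of what those "various techniques" might involve, here's a deliberately toy sketch: synthetic data standing in for force-sensor samples, and naive peak counting standing in for real signal processing. It bears no resemblance to Beddit's actual algorithms; it just shows how beat-to-beat intervals can, in principle, be inferred from a noisy periodic signal.

    import numpy as np

    # Synthetic stand-in for force-sensor samples: a ~1.1 Hz "heartbeat"
    # (about 66 bpm) buried in noise, sampled at 100 Hz for 30 seconds.
    fs = 100
    t = np.arange(0, 30, 1.0 / fs)
    signal = np.sin(2 * np.pi * 1.1 * t) + 0.3 * np.random.randn(t.size)

    # Naive beat detection: local maxima above a threshold, with a
    # 0.4-second refractory period so one beat isn't counted twice.
    peaks, last = [], -fs
    for i in range(1, signal.size - 1):
        if (signal[i] > 0.8 and signal[i] > signal[i - 1]
                and signal[i] >= signal[i + 1] and i - last > 0.4 * fs):
            peaks.append(i)
            last = i

    # Mean beat-to-beat interval gives a heart rate estimate.
    intervals = np.diff(peaks) / float(fs)
    print("Estimated heart rate: %.0f bpm" % (60 / intervals.mean()))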

Part of the motivation behind using the ballistocardiograph approach is that it can be practically implemented in a consumer product, while providing more detailed information than a worn accelerometer can provide. At the same time, as Paalasmaa’s thesis notes, mainstream sleep monitoring systems require the use of wearable sensors that can degrade the quality of sleep. "The unobtrusive measurement approach is particularly attractive for long-term use at home—even months or years—because the sensors are not expensive and no discomfort is caused to the user."

The sensor is unobtrusive. You do need to be sleeping on it though so if you move around a king-size bed, you’ll lose some results. Over the course of my testing, there were a couple of times during the night without data—presumably because I rolled off the sensor. Other than this reality inherent in a device you’re not wearing, I didn’t find anything about the device that didn’t work as advertised.

So, are you a potential customer for this?

If you have a desire to specifically track sleep, I’m inclined to give this device the nod over a Fitbit Charge HR (which is the wearable I have personal experience with). The fact that you can pretty much plug in the Beddit and forget about it gives it an advantage over a wearable that needs to be taken off and recharged every few days. Furthermore, the sleep data is more detailed in the case of the Beddit and is directly based on academic scientific research. The downside is that fitness bands track more than sleep and also aren’t constrained to being installed on a single bed. 


The broader question, and it’s one that I have about many wearables, boils down to something like this. OK, now that you’ve had a few days of fun and looked at your graphs, so what? How quantified does your life really need to be?

One answer of course is not very. The Beddit Smart tells me I generally sleep pretty well. But I pretty much knew that. (And my Fitbit tells me that I sometimes sleep poorly when I travel. I knew that too. It also tells me that when I spend all day writing, I don’t walk enough. Sadly, I know that as well.)

On the other hand, I could certainly see someone who doesn’t think they’re sleeping well finding a device like this a relatively inexpensive way to get some data before taking more serious steps to get to the root of the problem. The CDC estimates 50 to 70 million Americans have a sleep or wakefulness disorder.

I guess you could also try to quantify the degree to which a cup of espresso after dinner ruins your sleep although, like fitness tracking generally, I tend to be rather less systematic about such things.

Bottom line: Especially if you don’t already have a fitness band that tracks sleep, the Beddit Smart worked as advertised and is a good choice if you want to quantify your sleep patterns.

Disclaimer: The company provided me with a review unit but the opinions expressed in this review are strictly my honest evaluation of the product.

Tuesday, December 15, 2015

Links for 12-15-2015

Thursday, December 03, 2015

Presentation: The new distributed application infrastructure (SDI Summit 2015)

Today’s workloads require a new platform for development and execution. The platform must handle a wide range of recent developments, including containers and Docker (or other packaging methods), distributed resource management, and DevOps tool chains and processes. The resulting infrastructure and management framework must be optimized for distributed, scalable applications, work with a wide variety of open source packages, and provide a universally understandable interface for developers and administrators worldwide.

This is the keynote I gave at the SDI Summit in Santa Clara on December 2, 2015. It discusses the evolution from essentially server-centric infrastructure to a more dynamic containerized one. I'll cover portions of this presentation in greater detail in future posts.

Wednesday, December 02, 2015

Links for 12-02-2015

Monday, November 30, 2015

DevOps initiatives shouldn't just touch the new stuff


Although I feel as if it’s been dispelled to a significant degree, there lingers the misperception that DevOps [1] is mostly for companies that sport ping-pong tables and have free sushi for lunch. Firms that manufacture construction equipment and have large swaths of legacy computer code? Not so much.

It’s not particularly surprising that this misperception exists. A traditional IT organization glances at a company like Netflix and they may see a unicorn wholly unlike themselves. They’re not even entirely wrong. More extreme implementations of approaches such as microservices or near-continuous production releases likely won’t become the norm—especially in the “classic IT” (aka Mode 1) parts of their infrastructure. However, that doesn’t mean DevOps principles can’t also benefit the conservative IT of conservative firms.

It’s about the software

The first reason that DevOps practices apply outside of greenfield, cloud-native (aka Mode 2) IT is that the rules are changing. The “software is eating the world” meme has become something of a cliche but it’s no less true for that. As my colleague James Labocki wrote in a recent post, "Bank of America is not just a bank, they are a transaction processing company. Exxon Mobil, is not only an oil and gas company, they are a GIS company. With each passing day Walgreens business is more reliant on electronic health records.” Furthermore, as James also noted in that post, these shifts in technology and how business is transacted are creating new competitors that come at you from non-obvious directions and places. 

Therefore, while the priorities for classic IT may be different from those of cloud-native, it still needs to change. I’ll go so far as to say that calling this “legacy” is a potentially dangerous turn of phrase, as it implies something static and in need of wholesale replacement. In fact, to quote James one last time:

In mode-1 they [IT] are looking to increase relevance and reduce complexity. In order to increase relevance they need to deliver environments for developers in minutes instead of days or weeks. In order to reduce complexity they need to implement policy driven automation to reduce the need for manual tasks.

Getting there requires DevOps tools and approaches (together with policy-based hybrid cloud management).

DevOps thinking is proven to work in traditional industries

I thought DevOps was pretty new, you cry! In some ways, DevOps as we usually talk about it today is indeed the child of pervasive open source, continuous integration technologies, platform-as-a-service (PaaS), software-defined infrastructures, and a host of other relatively modern technologies. However, as Gartner points out in “DevOps is the Bimodal Bridge” (paywall):

Mode 1 organizations can use systems thinking for incremental improvements, such as reductions in waste and improved risk mitigation. While DevOps has embraced these methodologies, the concepts have, in fact, decades of real-world application in manufacturing and other industries.

(Here's one version of a presentation I give from time to time about the lessons from manufacturing for DevOps on Slideshare.)

Gartner also maintains that: "there are many elements in DevOps that may, in fact, apply across the modal spectrum. It is our firm belief that by 2020, at least 80% of the practices identified with DevOps and Mode 2 will be adopted by traditional Mode 1 groups for the overall benefit of the organization."

The need to work across IT

One number from a recent IDC InfoBrief sponsored by Red Hat “DevOps, Open Source and Business Agility: Lessons Learned from Early Adopters” (June 2015) popped out for me even in the context of multi-modal IT.

A majority of organizations (51 percent) don’t plan to have a dedicated DevOps organization. (36 percent do and 13 percent were unsure.) From my perspective, this is mostly a positive result. While dedicated organizations may suggest commitment and focus, they can equally mean stovepiped projects that don’t address the needs of or solve problems for the mainstream IT organization. As a result, their scope may be limited and fail to integrate with core IT systems. 

As Cameron Haight notes in another Gartner research note: "Initial DevOps toolchains are often focused on tactical integration scenarios, thereby restricting the ability to develop more flexible, general-purpose architectures."

Even when it makes sense to initiate DevOps as a pilot project, it’s important to keep attention (of both management and the DevOps folks doing the hands-on work) focused on the end business benefits, which should be the ultimate drivers. In the aforementioned IDC InfoBrief, employee productivity and business revenues were seen as important DevOps business impacts. But the #1 impact? Increased customer satisfaction and engagement. You’re not going to achieve that with a project touching a small portion of your IT.

[1] Here’s how we define DevOps at Red Hat. 

DevOps is an approach to culture, automation, and platform design for delivering increased business value and responsiveness through rapid, iterative, and high-quality IT service delivery. It applies open source principles and practices with: 

  • Culture of collaboration valuing openness and transparency
  • Automation that accelerates and improves the consistency of application delivery
  • Dynamic software-defined and programmable platform

Friday, November 20, 2015

My fave carry-on luggage


I travel a lot. Sometimes too much. And I get asked by a lot of friends and acquaintances about gear and other preferences. I’ve been meaning to write some of this down for a while. Here’s my start.

Let’s start with my biases. I avoid checking luggage whenever possible, which covers pretty much any week-long business trip and many other cases as well. I consider roll-aboards to be the instrument of the devil for anyone who is otherwise physically able to carry a shoulder bag or doesn’t have another specific need. They hog overhead space and trip you up on concourses. You should require a handicapped sticker to use one.

So soft luggage. Carry-on. What are my preferred options?

My go-to for business travel is the Patagonia MLC. (MLC = Maximum Legal Carryon) It’s got a nice shoulder strap as well as some thin backpack straps. Bomber zippers. A couple of outside compartments suitable for typical travel gear like pens, earphones, Kleenex, etc. My friend and former colleague Stephen O’Grady has called it the perfect luggage. 

I don’t go that far. A couple demerits:

The primary thing that I find sub-optimal about the MLC is that it divides the main compartment vertically. I find that this makes it difficult to pack rectangular or square-ish shapes or even bulky shoes. I get the desire to create separate zones in luggage but generally I’d just as soon use stuff-sacks, laundry bags, Eagle Creek cubes, or even a supermarket plastic bag within a larger space to separate dirty clothes and the like.

The zippers are also wrap-around. This makes it somewhat easier to squeeze in tight loads but it also makes it easier for casually closed zippers to shed contents in the middle of an airport. 

I’d also note that the thin backpack straps are intended for carrying modest loads for modest distances. But the MLC isn’t really intended as a “travel backpack.” It’s a reasonable tradeoff given that the backpack straps are not the focus of the luggage.

An alternative that I also use regularly is the Osprey Porter 46, which is much more explicitly in the vein of a travel backpack, without silly distractions like wheels or the rigid hunks of material that many products in the category sport. While I wouldn’t want to carry it on my shoulders were it filled with lead weights, the shoulder straps are reasonably padded and it also includes a hip belt. As with the Patagonia bag, the zippers and general quality are all solid.

While not rigid, the Osprey pack does loosely hold its shape. It’s primarily one large compartment although there’s a zipper at the top to a small compartment that basically takes its volume from the main compartment. As noted with the Patagonia, I’m generally good with the flexibility of this approach.

The Osprey is very much a travel backpack. It has a well-made padded handle but there’s no shoulder strap and it’s not really designed to be carried other than as a backpack. I generally take the Osprey when I know I’m going to be schlepping my luggage around a lot on foot, while I take the Patagonia on a more typical business trip.

There’s also an Osprey Porter 65, which has a 65L volume rather than a 46L volume but is otherwise identical to the smaller model. This bag is not airline carry-on compliant but is typically fine for trains. Now, I’m certainly not going to encourage people to take oversized bags on planes, but I would note that this is a relatively soft, compressible bag, so it can generally be put in an overhead bin if it’s only partially filled. I’ve done this at times when I’ve wanted the extra space at my destination, whether to consolidate my laptop bag into my luggage for walking around cities or traveling on trains, or to have room for purchases that I can then check for the trip home.

Links for 11-20-2015

Survey says: Who owns DevOps strategy?


I’ve previously written about the overall results from IDC’s “DevOps, Open Source and Business Agility: Lessons Learned from Early Adopters” InfoBrief study sponsored by Red Hat. I encourage you to take a look as there’s a lot of interesting data about enabling technologies, Platform-as-a-Service (PaaS), open source, and desired software support models.[1] This post though dives into a specific result that ended up on the cutting room floor when the final InfoBrief was edited.

"Of the following stakeholder groups, which has the primary responsibility for driving your organization's DevOps strategy?"

A plurality, though not a majority (38 percent), said that traditional application development teams had that responsibility. Other common answers included traditional IT operations teams (19 percent), dedicated DevOps teams (17 percent), and corporate C-level executive teams (13 percent).

I don’t find those overall numbers particularly surprising. DevOps tends to be thought of as being more about accelerating application development and release cycles than streamlining infrastructure operations.[2] So it’s pretty natural that devs would be seen as driving an initiative that most directly impacts them. (That said, in another survey question, 47 percent said that IT operations staff efficiency/productivity improvement was a primary DevOps goal so there are absolutely both dev and ops benefits.)

I might have expected to see more dedicated DevOps organizations driving strategy, at least in today’s early going. [3] However, our internal experience at Red Hat is that dedicated organizations can end up operating independently of the existing IT organization—making it hard to tie into existing apps and infrastructure. Therefore, I find the fact that early adopters are mostly viewing DevOps as something to be driven as part of mainstream IT rather than as an off-to-the-side project a good thing.

Slice the data based on how app devs answered and how IT ops answered though and things get interesting (if still not wholly unexpected).

It’s apparently quite obvious to your average developer who is or ought to be running the DevOps show. They should (76 percent) with another 10 percent allowing for the possibility of a dedicated organization driving the strategy. A mere 3 percent have IT ops driving things.

How did IT Ops answer? Well, they’re even more certain than devs that their counterparts shouldn’t be running DevOps with only 2 percent saying that traditional application development organizations have the primary responsibility for driving DevOps strategy. Beyond that near-unanimity though, they’re pretty divided. Only 34 percent said the traditional IT operations team should be in charge. Other responses were split between a dedicated team (24 percent), a corporate C-level executive team (21 percent), line of business decision makers (7 percent), or even a service provider like a system integrator (9 percent).

Pretty much anyone except their own developers I guess.

[1] Survey respondents were 220 IT decision makers in the US and UK who were either currently using DevOps in production or evaluating/testing DevOps.

[2] I’d argue that this dev-centric view isn’t the best way to think about DevOps, but it’s common.

[3] Note, however, that this question was specifically about who is driving or will drive strategy. A materially higher number (35 percent) have or plan to have a dedicated DevOps organization. That organization apparently just won’t drive strategy in many cases.

Tuesday, October 20, 2015

How open source is increasingly about ecosystems

[Image: Fish ecosystem by nerdqt87]

When we talk about the innovation that communities bring to open source software, we often focus on how open source enables contributions and collaboration within communities. More contributors, collaborating with less friction.

However, as new computing architectures and approaches rapidly evolve for cloud computing, for big data, for the Internet-of-Things, it’s also becoming evident that the open source development model is extremely powerful because of the manner in which it allows innovations from multiple sources to be recombined and remixed in powerful ways. Consider the following examples. 

Containers are fundamentally enabled by Linux. All the security hardening, performance tuning, reliability engineering, and certifications that apply to a bare metal or virtualized world still apply in the containerized one. And, in fact, the operating system arguably shoulders an even greater responsibility for tasks such as resource or security isolation than when individual operating system instances provided a degree of inherent isolation. (Take a look at the fabulous Containers coloring book by Dan Walsh and Máirín Duffy for more info on container isolation.)

What’s made containers so interesting in their current incarnation—the basic concept dates back over a decade—is that they bring together work from communities such as Docker, which are focused on packaging applications for containers and generally making containers easier to use, with complementary innovations in the Linux kernel. Linux security features and resource controls, such as control groups (cgroups), provide the infrastructure foundation needed to safely take advantage of container application packaging and deployment flexibility. Project Atomic then brings together the tools and patterns of container-based application and service deployment.

We see similar cross-pollination in the management and orchestration of containers across multiple physical hosts; Docker is mostly just concerned with management within a single operating system instance/host. One of the projects you’re starting to hear a lot about in the orchestration space is Kubernetes, which came out of Google’s internal container work. It aims to provide features such as high availability and replication, service discovery, and service aggregation. However, the full orchestration, resource placement, and policy-based management of a complete containerized environment will inevitably draw from many different communities.

For example, a number of projects are working on ways to potentially complement Kubernetes by providing frameworks and ways for applications to interact with a scheduler. One such project is Apache Mesos, which provides a higher level of abstraction with APIs for resource management and scheduling across cloud environments. Other related projects include Apache Aurora, which Twitter employs as a service scheduler to schedule jobs onto Mesos. At a still higher level, cloud management platforms such as ManageIQ extend management across hybrid cloud environments and provide policy controls to govern workload placement based on business rules as opposed to just technical considerations.

We see analogous mixing, matching, and remixing in storage and data. “Big Data” platforms increasingly combine a wide range of technologies from Hadoop MapReduce to Apache Spark to distributed storage projects such as Gluster and Ceph. Ceph is also the typical storage back-end for OpenStack—having first been integrated in OpenStack’s Folsom release to provide unified object and block storage. 

In general, OpenStack is a great example of how different, perhaps only somewhat-related open source communities can integrate and combine in powerful ways. I previously mentioned the software-defined storage aspect of OpenStack, but OpenStack also embeds software-defined compute and software-defined networking (SDN). Networking’s an interesting case because it brings together a number of different communities, including OpenDaylight (a collaborative SDN project under the Linux Foundation), Open vSwitch (which can be used as a node for OpenDaylight), and network function virtualization (NFV) projects that can then sit on top of OpenDaylight—to create software-based firewalls, for example.

It’s evident that, interesting as individual projects may be in isolation, what’s really accelerating today’s pace of change in software is the combination of these many parts building on and amplifying each other. It’s a dynamic that just isn’t possible with proprietary software.

Links for 10-20-2015

Wednesday, October 14, 2015

VMs, Containers, and Microservices with Red Hat's Mark Lamourine


In this podcast, my Red Hat engineering colleague Mark Lamourine and I discuss where VMs fit in a containerized world and whether microservices are really the future of application architecture and design. Will organizations migrate existing applications to new containerized infrastructures and, if so, how might they go about doing so?

Listen to MP3 (0:17:50)

Listen to OGG (0:17:50)

[TRANSCRIPT]

Gordon Haff:  Hi, everyone. This is Gordon Haff in Cloud Product Strategy at Red Hat. I'm here with my much more technical colleague, Mark Lamourine. Today we're going to talk about containers and VMs.

I'm not going to give away the punch line here, but a lot of our discussion is, I think, going to ultimately circle around the idea of "How purist do you want to be in this discussion?"

Just as a level set, Mark, how do you think of containers and VMs? I hesitate to say containers versus VMs. But how do you think about their relationship?

Mark Lamourine:  It's actually characterized pretty nicely in the name. A virtual machine is just that: you're emulating a whole computer. It has all the parts; you get to behave as if it's a real computer. You can treat it in kind of the conventional way, with an operating system and setup and configuration management.

A container is something where it's just much more limited. You don't expect to live in a container. It's something that serves your needs, has just enough of what you need for a period of time and then maybe you're done with it. When you're done, you set it aside and get another one.

Gordon:  They're both abstractions, containers and VMs are both abstractions at some level.

Mark:  There are situations now, at least, where you might want to choose one over the other. The most obvious one is a situation where you have a long‑lived process or a long‑lived server.

In the past, you would buy hardware and you'd set up these servers. More recently, you would set up a VM system, whether it's OpenStack or whatever. You'd put those machines there.

They tend to have a fairly long life. You apply configuration management, you update them periodically, and they probably have uptimes on the order of hundreds of days. If you've been in a really good shop, you've probably got machines with many hundreds of days of uptime for services like that: very stable, unitary, monolithic services.

Those are still out there. Those kinds of services are still out there.

Containers are more suited, at this point, to more transient situations: cases where you have a short‑term need, some service you want to set up for a brief period of time and then tear down, because that service is really just going to calculate the answer to some query or pull out some big data, and then you're going to shut it down and replace it.

Or other situations where you have a scaling problem. This is purely speculation, but I can imagine NASA, when they have their next Pluto flyby or whatever, needing to scale out and scale back. In that case, you know that those things are transient, so putting up full VMs for a web server that's just passing data through may not make sense.

On the other hand, the databases on the back end may need either real hardware or a virtual machine, because the data is going to stay. But the web servers may come and go based on the load.
I see containers and VMs still as both having a purpose.

Gordon:  Now, you used "At this point" a couple of times. What are some of the things, at least hypothetically, containers would need to do to get to the point where they could more completely, at least in principle, replace VMs?

Mark:  One of the things I'm uncomfortable about at this point is that people talk about containers being old technology. While that's true in a strict sense, we've had container‑like things even as far back as IBM mainframes and MVS.
It's just recently, in the last three or four years, become possible to use them everywhere, and to use them in ways we've never tried before and to build them up quickly and to build aggregations and combinations.

We're still learning how to do that. We're still learning how to take what, on a traditional host or VM, would be several different related services inside one box or inside one VM, all configured to work together and sharing the resources there.
In the container model, the tendency is to decompose those into parts. We're still not really good yet at providing the infrastructure to quickly, reliably, and flexibly set up moderate-to-large complex containerized services.

There are exceptions, obviously, people like Google and others. There are applications that work now at a large scale. But not, I think, in kind of the generalized way that I envision.

When that becomes possible, when we learn the patterns for how to create complex applications, to take a database container off the shelf, apply three parameters, connect it to a bunch of others, and have it be HA automatically, then I think there might be a place for that.

The other area is the HA part, where you could create a long‑lived service from transient containers. When you've got HA mechanisms well enough worked out, then when you need to do an update to a single piece, you kill off a little bit and you start up another bit with the more recent version. You gradually do a rolling update of the components and no one ever sees the service go down.

In that case, the service becomes long‑lived. The containers themselves are transient, but no one notices. When that pattern becomes established, when we learn how to do that well, it's possible that more and more hardware or VM services will migrate to containers. But we're not there yet.

Gordon:  I'm going to come back to that point in just a moment. I think one other thing that's worth observing, and we certainly see this in terms of some of the patterns around OpenStack, is that we very glibly talk about this idea of having cattle workloads. You just kill off one workload, it doesn't really matter, and so forth.

In fact, there's a fairly strong push to build enterprise virtualization types of functionality into something like OpenStack, for example, so you can do things like live migration. Because, in fact, it's easy to talk about cattle versus pets workloads. But like many things, the real world is more complicated than simple metaphors.

Mark:  Yes, and I think that the difference is still knowledge. We talk about cattle, we talk about having these large, independent, "I don't care" parts.

Currently, it's still hard for a small company, perhaps, to build something that has the redundancy necessary to make that possible. If you're a fairly small company, you're going to go to the cloud; if you're moderate‑sized, you're going to have something in‑house.

The effort of making something a distributed HA style service for your mail system or for whatever your core business is, it's still hard. It's easier to do it as a monolith, and as long as the costs associated with the monolith are lower than the costs associated with starting up a distributed service, an HA service, people are going to keep doing it.

When the patterns become well enough established that the HA part disappears down into the technology, that's when I think more of this kind of thing might really be cattle underneath.

Gordon:  Right. We see a lot of parallels here with things like parallel programming: when these patterns really have become well established, one of the key reasons is that the plumbing, so to speak, the rocket science needed to really do these things, has been submerged in underlying technology layers.

Mark:  That's actually what Docker is; that's really what Docker and Rocket have both done. They've taken all of the work of LXC or Solaris containers, they've figured out the patterns, they've made appropriate assumptions, and then they've pushed them down below the level where an ordinary developer has to think about them.

Gordon:  Coming back to what we were talking about a few minutes ago, we were talking a little bit about distributed systems and monoliths versus distributed services and so forth.

From a practical standpoint, let's take your typical enterprise today, where they're starting to migrate some applications or design new applications (which may be the more common use case here) and they want to use containers. There's actually some debate over how to start developing for these distributed systems.

Mark:  You hit on two different things there, and I want to go back to them. One is the tendency to migrate things to containers; the other is developing new services in containers.

It seems to me there's an awful lot of push for migration as opposed to developing new things. People want to jump into the cloud. They want to jump into containers. They're like, "Well, containerize this. Our thing needs to be containerized," without really understanding what that means in some cases.

That's the case where you have to ask which direction to go. Do you start by pulling it apart and putting each piece into containers? Or do you stuff the whole thing in and then see what parts you can tease out?

I think it really depends on the commitment of the people and their comfort with either approach. If you've got people who are comfortable taking your application and disassembling it and finding out the interfaces up front and then rebuilding each of those parts, great, go for it.

This actually makes me uncomfortable, because I prefer to decompose things. But if you've got something where you can get it, as a monolith or small pieces, into one or a small number of containers and it does your job, I can't really argue against that. If it moves you forward, you learn from it, and it gets the job done, go ahead. I'm not going to tell you that one way is right or wrong.

Gordon:  To your earlier point, in many cases it may not even make sense to move to a container if it's working fine in a VM environment. You don't really get brownie points for containerizing it.

Mark:  It seems like there's a lot of demand for it. Whether or not the demand is justified, again, is an open question.

Gordon:  A lot of people use different terms, but to use Gartner terminology, there's mode 1 IT and mode 2 IT. Really, the idea with mode 1 IT is that you very much want to modernize where appropriate, for example replacing legacy Unix with Linux and bringing more DevOps principles into your software development, but you don't need to wholesale replace it or wholesale migrate it.

Whereas your new applications are going to be developed more on mode 2 infrastructure with mode 2 techniques.
We've been talking about how you migrate or move, assuming that you do. How about new applications? There actually even seems to be some controversy among various folks in the DevOps "movement" or in microservices circles over the best way to approach developing for new IT infrastructures.

Mark:  Microservices is an interesting term because it has a series of implications about modularity and tight boundaries and connections between the infrastructure that...

To me, microservices almost seems like an artificial term. It's something that represents strictly decomposed, very, very short‑term components, and I find that to be an artificial distinction. Maybe it's a scale issue: I see services as a set of cooperating, communicating parts.

Microservices is just an extreme of that, where you take the tiniest parts, set a boundary there, and then build up something with kind of a swarm of these things.

Again, I think we're still learning how this stuff works. People are still exploring microservices, and they'll look back and say, "Oh yeah, we've done stuff like this before," with, I think, SOA applications and SOAP and things like that.

But if you really look at it, there are comparisons, but there are also significant differences. I think the differences are sometimes overlooked.

Gordon:  One of the examples that I like to bring up is Netflix, which gets a lot of attention for its famously super microservices type of architecture.

But the reality is, there are other web companies out there, like Etsy, for example, that are also very well known for being very DevOpsy. They speak at a lot of conferences and the like, and they basically have this big monolithic PHP application. Having a strict microservices architecture isn't necessary to do all this other stuff.

Mark:  It shifts your knowledge and your priorities. The Netflix model lends itself well to these little transient services. When a customer asks for something (I haven't watched their talks, but I'm assuming), what that triggers is a cascade of lots of little apps that start up and serve them what they asked for. When they're done, those little services get torn down and they're ready for the next one.

There are other businesses where that isn't necessarily the right model. Certainly, as your examples show, you can do it either way. I guess each business needs to decide for themselves where the tipping points are for migration from one to the other.

Gordon:  Yeah. I think if I had to summarize our talk here, and maybe it's a good way to close things out: there are a lot of interesting new approaches that at least some unicorns are using very effectively. But it's still sort of an open question how this plays out across the broader mainstream, the majority, the late majority, the slower adopters, across a wider swath of organizations.

Mark:  I think what we have now is a set of bespoke, hand‑crafted systems. At Netflix, they're doing things at a large scale, but they had to develop a lot of that for themselves.
Now it means that a lot of what used to be human-intensive operations are done automatically. That doesn't necessarily generalize.

That's where I think there's still a lot of work to be done: to look at the Netflixes, to look at the other companies that are strongly adopting microservices, both for inside and outside services, because you could say the same thing for services inside a company.
I think over the next four or five years, we'll see those patterns emerge. We'll see the generalization happen. We'll see the cases where people identify, "This is an appropriate way, and we've documented it, and someone's bottled it so that you can download it and run it and it will work."

But I think we're still a few years out from generic containerized services, new ones, at the push of a button. It still requires a lot of custom work to make them happen.