Friday, January 22, 2016

Book Review: Cooking for Geeks, Second Edition

As a single book, this combination of food science, recipes, equipment, ingredients, experimentation, interviews, and geeky humor is hard to beat. It’s not necessarily deep in all those areas, but it’s an appealing total package for those curious about the whys of food.

This is the second edition of Jeff Potter’s book. At 488 pages, it’s about 50 pages longer than its predecessor. There are new interviews and graphics along with a fair bit of updating and rearranging from the prior edition—although the overall look, feel, and organization aren’t a major departure.

The book is written in a lighthearted and gently humorous way. Random sample from the intro to Taste, Smell, and Flavor: “You open your fridge and see pickles, strawberries, and tortillas. What do you do? You might answer: create a pickle/strawberry burrito. Or if you’re less adventurous, you might say: order pizza. But somewhere between making a gross-sounding burrito and ordering takeout is another option: figuring out the answer to one of life’s deeper questions: How do I know what goes together?” Probably not to everyone’s taste I realize, but it works for me.

It covers a broad swath of the science: the aforementioned tastes, smells, and flavors; time and temperature—and what those mean for cooking proteins and melting fats; additives like emulsifiers and thickening agents; air, water, and leavening agents. It’s not the science tome that is Harold McGee’s On Food and Cooking, but it’s a more easily accessible presentation. (Though, if you read this book and enjoy it, by all means pick up McGee and vice versa.)

Cooking for Geeks at least touches on most of the major modernist cooking practices, including sous vide, along with practical tips for same. Arguably, some of the DIY material around sous vide is a bit dated given the price drops of modern immersion circulators, but this is a book about experimentation after all. (The link in the book does go to a list of current equipment options though.) There are also interviews with many of the usual suspects in that space such as Nathan Myhrvold and Dave Arnold.

Is this book for the cooking-curious geek who doesn’t have much real cooking experience? It could be, but they might want to pair it with another book more focused on basic cooking techniques. The recipes here are relatively straightforward and the instructions are clear, but there’s not a lot of photography devoted to the recipes, and the instructions for things like Béchamel Sauce are probably a bit bare-bones for a first-timer.

I’d also note that the recipes are often there to provide examples for the science discussion. There isn’t a lot of discussion about why a specific recipe uses a specific set of techniques. For that sort of thing, I recommend books from the America’s Test Kitchen empire, perhaps starting with their The New Best Recipes book—which also has the virtue of being a pretty comprehensive set of basic and not-so-basic recipes. It’s also rather sober and by-the-numbers, a much different experience. (Alton Brown also seems to have his followers in geeky circles, although I’ve never personally been all that enthusiastic.)

One final point is that, for many, this is a book you will flip through and alight on a topic of interest. It’s not that you couldn’t read it from cover to cover, but the many sidebars and interviews and short chunks of material seem to encourage non-linear exploration. 

Bottom line: 5/5. Highly recommended for anyone with an interest in the science of cooking even if they don’t want to get into deep chemistry and physics.

Disclaimer: This book was provided to me as a review copy but this review represents my honest assessment.

Links for 01-22-2016

The new distributed application development platform: Breaking down silos


A document came out of Gaithersburg, Maryland in 2011. Published by the National Institute of Standards and Technology, it was simply titled “The NIST Definition of Cloud Computing.” If you attended tech conferences during that period, reciting some version of that definition was pretty much a requirement. The private, public, and hybrid cloud terms were in this document. So were concepts like on-demand self-service and resource pooling. As were the familiar Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) service models.

NIST didn’t invent any of this out of whole cloth. But by packaging up a sort of industry and government consensus about the basics of cloud computing, they regularized and standardized that consensus. And, overall, it worked pretty well. IaaS was about provisioning fundamental computing resources like processing, storage, and networks. SaaS was about providing applications to consumers.

As for PaaS? PaaS was about applications created using programming languages, libraries, services, and tools supported by the provider. 

Arguably, this PaaS definition was never as neat as the others. IaaS resources were easy to understand; they were like the resources you have on a server, except cloudier. And SaaS was just an app on the Web—application service providers (ASPs) reimagined, if you will. PaaS was sort of everything above infrastructure but below an application an end user could run directly: cloud-enabled middleware, hooks to add features to a single online service like Salesforce.com, single-purpose hosted programming environments (as Google App Engine and Azure were initially), and open, extensible environments like OpenShift that could also be installed on-premises. Most of it fell broadly under the PaaS rubric.

The NIST definition also didn’t really capture how the nature of the service model depends upon the audience to an extent. Thus, Salesforce.com is primarily a SaaS as far as the end-user is concerned but it’s a PaaS in the context of developers extending a CRM application. 

Today, I’d argue that the lines NIST drew probably still have some practical significance, but the distinctions are increasingly frayed. IaaS platforms have almost universally moved beyond simple infrastructure. OpenStack has compute (Nova), storage (Swift and Cinder), and networking (Neutron) components, but it also includes database projects (Trove), identity management (Keystone), and the Heat orchestration engine to launch composite cloud applications.
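To make that concrete, here’s a rough sketch of what working against such an “IaaS-plus” platform can look like from Python. It assumes the openstacksdk client and a cloud named “mycloud” configured in clouds.yaml (both assumptions, not anything from the projects’ docs); the point isn’t the specific calls so much as that a single set of APIs now fronts much more than raw compute, storage, and networking.

```python
# Minimal sketch, assuming openstacksdk ("pip install openstacksdk") and a
# cloud entry named "mycloud" in clouds.yaml.
import openstack

conn = openstack.connect(cloud="mycloud")

# Classic IaaS resources...
for server in conn.compute.servers():        # Nova
    print("server:", server.name)
for net in conn.network.networks():          # Neutron
    print("network:", net.name)

# ...alongside higher-level services reachable through the same connection.
for user in conn.identity.users():           # Keystone
    print("user:", user.name)
for stack in conn.orchestration.stacks():    # Heat
    print("stack:", stack.name)
```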

In many cases these higher-level functions can be either used standalone or replaced/complemented by more comprehensive alternatives. For example, in a hybrid cloud environment, a cloud management platform like Red Hat CloudForms (ManageIQ is the upstream project) provides multi-cloud management and sophisticated policy controls. The IaaS+ term is sometimes used to capture this idea of more than base-level infrastructure but less than a comprehensive developer platform.

In the case of SaaS, today’s APIs-everywhere world means that most things with a UI can also be accessed programmatically in various ways. In other words, they’re platforms—however limited in scope and however tied to a single application.
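As a toy illustration of that point (the endpoint, token, and field names below are all made up), the “platform” side of a SaaS product often amounts to nothing more than hitting its REST API instead of clicking through its UI:

```python
# Hypothetical sketch: substitute the real API of whatever SaaS product you use.
import requests

API = "https://api.example-saas.com/v1"   # placeholder base URL
TOKEN = "..."                             # an API token issued by the service

resp = requests.get(
    f"{API}/accounts",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()

# The same data the UI shows, now available to scripts and integrations.
for account in resp.json():
    print(account.get("name"), account.get("status"))
```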

But, really, the fraying is broader than that. I’ve argued previously that we’re in the process of shifting toward a new style of distributed application infrastructure and of developing applications for that infrastructure. It won’t happen immediately—hence, Gartner’s bimodal IT model—but it will happen. In the process, traditional specialties/silos (depending upon your perspective) are breaking down. This is true whether you’re talking enterprise buyers/influencers, IT organizations, industry analysts, or vendors. 

As a result, it's hard to separate PaaS—in the relatively narrow sense in which it was first discussed—from the broader idea of an application development platform with middleware, integration, messaging, mobile, and other services. Red Hat's doing a lot of work to bridge those two worlds. For example, Red Hat’s JBoss Middleware portfolio of libraries, services, frameworks, and tools is widely used by developers to build enterprise applications, integrate applications and data, and automate business processes. With JBoss xPaaS Services for OpenShift, these same capabilities are being offered integrated with OpenShift. This lets developers build applications, integrate with other systems, orchestrate using rules and processes, and then deploy across hybrid environments.

The advantage of the xPaaS approach is that it doesn’t merely put middleware into the cloud in its traditional form. Rather, it effectively reimagines enterprise application development to enable faster, easier, and less error-prone provisioning and configuration for a more productive developer experience. Eventually all of the JBoss Middleware products will have xPaaS variants. In each case, the core product is exactly the same whether used in a traditional on-premise manner or as xPaaS, so apps can be moved seamlessly between environments. In the xPaaS environment, JBoss Middleware developers benefit from OpenShift-based user interface enhancements, automated configuration, and a more consistent experience across different middleware products.

Then DevOps [1] comes along to blur even more lines because it brings in a whole other set of (often open source) tooling, including CI/CD (e.g., Jenkins), automation and configuration management (e.g., Ansible), collaboration, testing, monitoring, and more. These are increasingly part of that new distributed application platform, as is the culture of iteration and collaboration that DevOps requires.

I have trouble not looking at this breakdown of historical taxonomies as a positive. It offers the possibility of more complete and better integrated application development platforms and more effective processes to use those platforms. It’s not the old siloed world any longer.

[1] I just published this white paper that gives my/Red Hat’s view of DevOps.

Photo credit: Flickr/cc https://www.flickr.com/photos/timbodon/2200884398

 

Friday, January 15, 2016

Why bimodal is a useful model


You hear a lot about “bimodal” IT these days. Gartner’s generally considered to have coined that specific term but similar concepts have percolated up from a number of places. Whatever you call it, the basic idea is this:

You have Classic IT that’s about attributes like stability, robustness, cost-effectiveness, and vertical scale. These attributes come through a combination of infrastructure and carefully controlled change management processes. IT has classically operated like this and it works well. Financial trades and transfers execute with precision, reliability, speed, and accuracy. The traditional enterprise resource planning systems operate consistently and rarely have a significant failure.

By contrast, cloud-native IT emphasizes adaptability and agility. Infrastructure is software-defined and therefore programmable through APIs. Cloud-native applications running on OpenStack infrastructure are loosely coupled and distributed. In many cases, the dominant application design paradigm will come to be microservices — reusable single-function components communicating through lightweight interfaces.
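For concreteness, here’s a deliberately minimal sketch of what a “reusable single-function component communicating through a lightweight interface” can look like. Everything in it is hypothetical, and a real microservice would add packaging, health checks, and service discovery on top.

```python
# Toy single-purpose service using only the Python standard library.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class PriceHandler(BaseHTTPRequestHandler):
    """Does exactly one thing: returns a price for a product ID."""

    def do_GET(self):
        # A real service would look this up in a data store; hard-coded here.
        body = json.dumps({"product": self.path.strip("/"), "price_usd": 9.99})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PriceHandler).serve_forever()
```

The point isn’t these few lines of Python; it’s that each component does one narrow thing and exposes it over a boring protocol, which is what makes such components composable and independently deployable.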

The argument for taking an explicit bimodal approach is essentially two-fold. 

On the one hand, organizations have to embrace “go fast” cloud-native platforms and practices going forward if they’re going to be able to use IT to help strategically differentiate themselves. And they increasingly have to. Apps. Software services. Robust online customer service. The list goes on. 

On the other hand, for most organizations with existing IT investments, it’s not realistic—too expensive, too disruptive—to just call in the bulldozers and start over from scratch. Yet it’s equally impractical to just treat IT as a uniform “timid middle” based on practices and approaches too fast for traditional business systems but too slow for fast-paced, experimental innovation.

That said, the model has its critics. In my view, most of these criticisms come from misunderstanding (willfully or otherwise) what a thoughtful application of this model is really about. So I’m going to take you through some of these criticisms and explain why I think they’re off the mark. 

Bimodal IT treats traditional IT as legacy and sets it up for failure.

This critique is probably the most common one I hear and, in all fairness, it’s partly because some of the nuances of the bimodal model aren’t always obvious. Gartner, at least, has always been explicit that Mode 1 (classic) IT needs to be renovated and modernized. Here’s just one quote from CSPs' Digital Business Requires a Bimodal IT Transformation Strategy, October 2014: "Modifying the existing IT infrastructure for an effective and efficient use, while maintaining a reliable IT environment, requires CIOs to implement incremental IT modernization." 

Modernization is indeed key to making the model work. Another Gartner note, DevOps is the Bimodal Bridge (April 2015), observes: "DevOps is often thought of as an approach only applicable to a Mode 2 or nonlinear style of IT behavior. Yet there are key parts or patterns of DevOps that are equally applicable to Mode 1 IT organizations that enable DevOps to be a bridge between the two IT approaches.”

Incrementally upgrading platforms (e.g. proprietary Unix to Linux) and modernizing application development practices are essential elements of a bimodal approach.

Bimodal IT is a crutch for lazy CIOs

Related to the above, this argument goes that bimodal IT gives CIOs a license not to aggressively pursue cloud native initiatives on the grounds that they can just argue that most of their IT needs to remain in its go-slow form. As John Willis has put it, “I think a lot of folk think that mode 1 is the wrong message… :-)” and “I think also most feel (like me) that Bi-modal is a get out of jail free card for bad process/culture…”

Those criticisms are fair, at least up to a point. But Dave Roberts made some points in the discussion that largely reflect my thinking as well. He notes that “Most of it [issues with bimodal] seems predicated on piss-poor management practices, which if you have those you’re screwed anyway.” He adds, “If you want to be lazy, you will find a way. But that’s true regardless of model.”

At the end of the day, I think what we’re seeing to a certain degree here is a debate between pragmatists and those who place a higher priority on moving fast even if doing so breaks things. I’m inclined to align with the pragmatists while acknowledging that part of pragmatism is recognizing when circumstances require breakage over taking measured steps. To give Dave the final word: “Obviously, use the model wisely. If your market requires speed on all fronts, then you need Mode 2 everywhere."

Bimodal is too simple

This is essentially the opposite argument. Bimodal doesn’t capture the complexity of IT.

Sometimes the critique is quite specific. For example, Simon Wardley argues that "When it comes to organising then each component not only needs different aptitudes (e.g. engineering + design) but also different attitudes (i.e. engineering in genesis is not the same as engineering in industrialised). To solve this, you end up implementing a "trimodal" (three party) structure such as pioneers, settlers and town planners which is governed by a process of theft."

Alternatively, some of the criticism boils down to a more generic argument that IT is complex and heterogeneous and no general model can really capture that complexity and heterogeneity so we shouldn’t even try.

The value of a bimodal model

To this last point, I say that all models simplify and abstract but they’re no less useful for that. They suggest common patterns and common approaches. They’re not (or shouldn’t be) intended as rigid prescriptive frameworks that precisely describe messy realities but they may offer insights into moving those messy realities forward in a practical way.

Is bimodal the only possible model? Of course not. I’m not going to argue that, say, Pioneers/Settlers/Town Planners isn't an equally valid framework. If that, or something else, works for you, go for it! All I can say is that a lot of IT executives I speak with find the two-speed IT lens a useful one because it resonates with their experiences and their requirements.

Which suggests to me that it’s a useful model for many IT organizations at this point in time. Just don’t forget that it is, after all, only a model and a guide and not a detailed roadmap to be followed slavishly.

Photo by Stephen Shankland. Used with permission. https://www.flickr.com/photos/shankrad/24079824410/ 

Thursday, January 07, 2016

IDC survey says: Go cloud-native but modernize too

IDC’s recent survey of “cloud native” early adopters tells us that existing applications and infrastructure aren’t going away. 83 percent expect to continue to support existing applications and infrastructure for the next three years. In fact, those who are furthest along in shifting to distributed, scale-out, microservices-based applications are twice as likely to say that they are going to take their time migrating as those who are less experienced with implementing cloud native applications and infrastructure. It’s easier to be optimistic when you haven’t been bloodied yet!

IDC conducted this survey of 301 North American and EMEA enterprises on Red Hat’s behalf; the results are published in a December 2015 IDC InfoBrief entitled Blending Cloud Native & Conventional Applications: 10 Lessons Learned from Early Adopters.


It’s worth noting that even these cloud native early adopters plan to also modernize their existing conventional infrastructure. For example, in addition to the 51 percent continuing with their virtualization plans, 42 percent plan to migrate to software-defined storage/networking and to containerize applications currently running on virtual or physical servers. 

This is an important point. The bimodal IT concept—originally a Gartnerism but now used pretty widely to connote two speeds or two modes of IT delivery—is sometimes critiqued for a variety of reasons. (To be covered in a future post.) However, perhaps the most common criticism is that Mode 1 is a Get Out of Jail Free card for IT organizations wanting to just continue down a business as usual path. This survey shows that those furthest along in transitioning to cloud-native don’t see things that way at all. (It should be mentioned that Gartner doesn’t either and sees modernization as a key component of Mode 1.)

Open source was almost universally seen as playing a key role in any such strategy with 96 percent viewing open source as an enabler of cloud native integration and conventional app modernization. No surprises there. An earlier IDC survey on DevOps early adopters found a similar view of open source with respect to DevOps tooling.

The study also found that security and integration were important elements of a cloud native transition strategy. For example, 51 percent identified security, user access control, and compliance policies as the technical factor that would have the greatest impact on their organization’s decisions about whether applications are best supported by conventional or cloud native architectures.

The #2 factor (42 percent) was the ability to support/integrate with existing databases and conventional applications—highlighting the need for management tools and process integration between new applications and existing workflows, apps, and data stores. Business Process Optimization was identified as an important integration element. Strategies included structured and unstructured data integration, business process automation, model-driven process management, and the use of an enterprise service bus and cloud APIs.

If I had to choose one word to convey the overall gestalt of the survey, I think I’d choose “pragmatic.” IDC surveyed cloud native early adopters, so these are relatively leading edge IT organizations. Yet, these same organizations also emphasized SLAs and minimizing business risks. They stress avoiding business and end-user disruption. They plan to transition gradually.  

Links for 01-07-2016

Wednesday, January 06, 2016

Beyond general purpose in servers

[Image: Broadwell 14nm wafer]

Shortly before I checked out for the holidays, I had the pleasure to give a keynote at SDI Summit in Santa Clara, CA. The name might suggest an event all about software (SDI = software-defined infrastructure) but, in fact, the event had a pretty strong hardware flavor. The organizers, Conference Concepts, put on events like the Flash Memory Summit. 

As a result, I ended up having a lot more hardware-related discussions than I usually do at the events I attend. This included catching up with various industry analysts who I’ve known since the days I was an analyst myself and spent a lot of time looking at server hardware designs and the like. In any case, some of this back and forth started to crystallize some of my thoughts around how the server hardware landscape could start changing. Some of this is still rather speculative. However, my basic thesis is that software people are probably going to start thinking more about the platforms they’re running on rather than taking for granted that they’re racks of dual-socket x86 boxes. Boom. Done.

What follows are some of the trends/changes I think are worth keeping an eye on.

CMOS scaling limits

If this were a headline, it would probably be titled “The End of Moore’s Law,” but I’m not looking for the clicks. This is a complicated subject that I’m not going to be able to give its appropriate due here. However, it’s at the root of some other things I want to cover. 

Intel is shipping 14nm processors today (Broadwell and Skylake). It’s slipped the 10nm Cannonlake out to the second half of 2017. From there things get increasingly speculative: 7nm, 5nm, maybe 3nm.

There are various paths forward to pack in more transistors. It seems as if there’s a consensus developing around 3D stacking and other packaging improvements as a good near-ish term bet. Improved interconnects between chips are likely another area of interest. For a good read, I point you to Robert Colwell, presenting at Hot Chips in 2013 when he was Director of the Microsystems Technology Office at DARPA.

However, Colwell also points out that from 1980 to 2010, clocks improved 3500X and microarchitectural and other improvements contributed about another 50X performance boost. The process shrink marvel expressed by Moore’s Law (Observation) has overshadowed just about everything else. This is not to belittle in any way all the hard and smart engineering work that went into getting CMOS process technology to the point where it is today. But understand that CMOS has been a very special unicorn and an equivalent CMOS 2.0 isn’t likely to pop into existence anytime soon.
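For perspective, a quick back-of-the-envelope calculation (mine, not Colwell’s) shows what those cumulative factors imply on a per-year basis:

```python
# Back-of-the-envelope arithmetic on Colwell's numbers: what compound annual
# improvement do 3500x (clocks) and 50x (microarchitecture and other) over the
# 30 years from 1980 to 2010 imply?
years = 30
for label, factor in [("clock speed", 3500), ("microarchitecture & other", 50)]:
    annual = factor ** (1 / years) - 1
    print(f"{label}: {factor}x over {years} years is about {annual:.1%} per year")

# Roughly 31% per year from clocks versus about 14% per year from everything
# else combined -- which is the sense in which process scaling overshadowed
# the rest.
```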

[Image: Intel 10nm challenges slide]

Moore’s Law trumped all

There are doubtless macro effects stemming from processors not getting faster or memory not getting denser (at least as quickly as in the past), but I’m going to keep this focused on how this change could affect server designs.

When I was an analyst, we took lots of calls from vendors wanting to discuss their products. Some were looking for advice. Others just wanted us to write about them. In any case, we saw a fair number of specialty processors. Many were designed around some sort of massive-number-of-cores concept. At the time (roughly back half of the 2000s), there was a lot of interest in thread-level parallelism. Furthermore, fabs like TSMC were a good option for hardware startups wanting to design chips without having to manufacture them.

Almost universally, these startups didn’t make it. Part of it is just that, well, most startups don’t make it and the capital requirements for even fabless custom hardware are relatively high. However, there was also a pattern.

Even in the best case, these companies were fighting a relentless doubling of processor speed every 18 to 24 months from Intel (and sometimes AMD) on the back of enormous volume. So these companies didn’t just need to have a more optimized design than x86. They needed to be so much better that they could overcome the aforementioned x86 inertia while competing, on much lower volume, against someone improving at a rapid, predictable pace. It was a tough equation.

I saw lots of other specialty designs too. FPGAs, GPU computing, special interconnect designs. Some of this has found takers in high performance computing, which has always been more willing to embrace the unusual in search of speed. However, in the main, the fact that Moore’s Law was going to correct any performance shortcomings in a generation or two made sticking with mainstream x86 an attractive default.

The rise of specialists

In Colwell’s aforementioned presentation, he argues that the “end of Moore’s Law revives special purpose designs.” (He adds the caveat to heed the lessons of the past and not to design unprogrammable engines.) Intel’s recent $16.7 billion acquisition of Altera can be seen as part of a transition to a world in which we see more special purpose chips. As the linked WSJ article notes: "Microsoft and others, seeking faster performance for tasks like Web searches, have experimented with augmenting Intel’s processors with the kind of chips sold by Altera, known as FPGAs, or field programmable gate arrays. Intel’s first product priority after closing the Altera deal is to extend that concept."

Of course, CPUs have long been complemented by other types of processors for functions like networking and storage. However, the software-defined trend has been at least somewhat predicated on moving away from specialty hardware toward a standardized programmable substrate. (So, yes, there’s some irony in discussing these topics at an SDI Summit.)

I suspect that it’s just a tradeoff that we’ll have to live with. Some areas of acceleration will probably standardize and possibly even be folded into CPUs. Other types of specialty hardware will be used only when the performance benefits are compelling enough for a given application to be worth the additional effort. It’s also worth noting that the increased use of open source software means that end-user companies have far more options to modify applications and other code to use specialized hardware than when they were limited to proprietary vendors.  

ARM AArch64

Flagging ARM as another example of potential specialization is something of a no-brainer even if the degree and timing of the impact are TBD. ARM is clearly playing a big part in mobile. But there are reasons to think it may have a bigger role on servers than in the past. That it now supports 64-bit is huge because that's table stakes for most server designs today. However, almost as important is that ARM vendors have been working to agree on certain standards.

As my colleague Jon Masters wrote when we released Red Hat Enterprise Linux Server for ARM Development Preview: "RHELSA DP targets industry standards that we have helped to drive for the past few years, including the ARM SBSA (Server Base System Architecture), and the ARM SBBR (Server Base Boot Requirements). These will collectively allow for a single 64-bit ARM Enterprise server Operating System image that supports the full range of compliant systems out of the box (as well as many future systems that have yet to be released through minor driver updates). “ (Press release.)

There are counter-arguments. x86 has a lot of inertia even if some of the contributors to that inertia like proprietary packaged software are less universally important than they were. And there’s lots of wreckage associated with past reduced-power servers both using ARM (Calxeda) and x86-compatible (Transmeta) designs.

But I’m certainly willing to entertain the argument that AArch64 is at least interesting for some segments in a way that past alternatives weren’t.

Parting thoughts

In the keynote I gave at SDI Summit, The New Distributed Application Infrastructure, I argued that we’re in a period of rapid transition from a longtime model built around long-lived applications installed in operating systems to one in which applications are far more componentized, abstracted, and dynamic. The hardware? Necessary but essentially commoditized.

That’s a fine starting point to think about where software-defined infrastructure is going. But I increasingly suspect it makes a simplifying assumption that won’t hold much longer. The operating system will help to abstract away changes and specializations in the hardware foundation as it has in the past. But that foundation will have to adjust to a reality that can’t depend on CMOS scaling to advance.

Tuesday, January 05, 2016

Getting from here to there: conventional and cloud-native


Before the holiday break, I wrote a series of posts over at the Red Hat Stack blog in which I added my thoughts about cloud native architectures, bridging those architectures with conventional applications, and some of the ways to think about transitioning between different architectural styles. My jumping off point was an IDC Analyst Connection in which Mary Johnson Turner and Gary Chen answered five questions about "Bridging the Gap Between Cloud-Native and Conventional Enterprise Applications." Below are those questions and the links to my posts:

Cloud-native application architectures promise improved business agility and the ability to innovate more rapidly than ever before. However, many existing conventional applications will provide important business value for many years. Does an organization have to commit 100% to one architecture versus another to realize true business benefits?

http://redhatstackblog.redhat.com/2015/11/19/does-cloud-native-have-to-mean-all-in/

What are the typical challenges that organizations need to address as part of this evolution [to IT that at least includes a strong cloud-native component]?

http://redhatstackblog.redhat.com/2015/11/30/evolving-it-architectures-it-can-be-hard/

How will IT management skills, tools, and processes need to change [with the introduction of cloud-native architectures]?

http://redhatstackblog.redhat.com/2015/12/03/how-cloud-native-needs-cultural-change/

What about existing conventional applications and infrastructure? Is it worth the time and effort to continue to modernize and upgrade conventional systems?

http://redhatstackblog.redhat.com/2015/12/09/why-cloud-native-depends-on-modernization/

What types of technologies are available to facilitate the integration of multiple generations of infrastructure and applications as hybrid cloud-native and conventional architectures evolve?

http://redhatstackblog.redhat.com/2015/12/15/integrating-classic-it-with-cloud-native/

Photo credit: Flickr/CC Scott Robinson https://www.flickr.com/photos/clearlyambiguous/390311509 

Monday, January 04, 2016

What's up with Gordon in 2016?

First off, let me say that I’m not planning big changes although I’m sure my activities will continue to evolve as the market does. Red Hat’s doing interesting work in a diverse set of related areas and I’ll continue to evangelize those technologies, especially as they span multiple product sets. With that said, here’s how the year looks to be shaping up so far.

Travel and speaking. Last year broke a string of most-travel-ever years with airline mileage ending up “just” in the 60,000 mile range. This was partially because I didn’t make it to Asia last year but it was still a somewhat saner schedule overall. It remains to be seen what this year will bring but I’ll probably shoot for a similar level this year.

I already know I’ll be at Monkigras in London, ConfigMgmtCamp in Gent, CloudExpo in NYC, Interop in Vegas, and IEEE DevOps Unleashed in Santa Monica. I also typically attend a variety of Linux Foundation events, an O’Reilly event or two, Red Hat Summit (in SF this year), and VMworld (although I always say I won’t); I will probably do most of these this year as well. I may ramp things up a bit—especially for smaller gatherings—in my current areas of focus, specifically DevOps and IoT. This translates into DevOps Days and other events TBD.

If there’s some event that you think I should take a look at or would like me to speak, drop me a line. Note that I’m not directly involved with sponsorships, especially for large events, so if you’re really contacting me to ask for money, please save us both some time.

Writing. I have various papers in flight at the moment and need to map out what’s needed over the next six months or so. I also begin the year with my usual good intentions about blogging, which I was reasonably good about last year. My publishing schedule to this blog was down a bit, but I’ve also been writing for opensource.com, redhatstackblog.redhat.com, and openshift.com—as well as a variety of online publications.

You’re reading this on my "personal" blog. It's mostly (75%+) devoted to topics that fall generally under the umbrella of "tech." I generally keep the blog going with short link-comments when I'm not pushing out anything longer. The opinions expressed on this blog are mine alone and the content, including Red Hat-related content, is solely under my control. I’m also cross-posting to Medium when I feel it’s justified.

My biggest ambition this year is to publish a new book. This has been the subject of on-again, off-again mulling for the last 12 to 18 months or so. I began with the intent to just revise Computing Next to bring in containers and otherwise adapt the content to the frenetic change that’s going on in the IT industry today. However, as time went on, this approach made less and less sense. Too much was different and too many aspects required reframing.

Therefore, while I will probably repurpose some existing content, I’m going to start fresh. The pace of change still makes writing a book challenging but, given where we are with containers, xaaS, DevOps, IoT, etc., I’m hopeful that I can put something together that has some shelf life. My current plan is to shoot for something in the 100-120 page range (i.e. longer than pamphlet-style but shorter than a traditional trade paperback) for completion by the summer. I’d really like to have it done by Red Hat Summit but we’ll see how likely this is. The working title is Phase Shift: Computing for the New Millennium, and it will focus on how new infrastructure, application development trends, mobile, and IoT all fit together.

Podcasts. I grade myself about a B for podcasts last year. I think I had some good ones but wasn’t as aggressive about scheduling recording sessions as I could have been. I expect this year will end up similarly although I’m going to make an effort to bring in outside interview subjects on a variety of topics. I find 15 minute interviews are a good way to get interesting discussions out there without too much effort. (And I get them all transcribed for those who would prefer to read.)

Video. Video seems to be one thing that largely drops off my list. It takes a fair bit of work and I’ve struggled with how to use it in a way that adds value and is at least reasonably professional looking. It probably doesn’t help that I’m personally not big into watching videos when there are other sources of information.

Social networks. I am active on twitter as @ghaff. As with this blog, I concentrate on tech topics but no guarantees that I won't get into other topics from time to time.

I mostly view LinkedIn as a sort of professional rolodex. If I've met you and you send me a LinkedIn invite, I'll probably accept though it might help to remind me who you are. I'm most likely to ignore you if the connection isn’t obvious, you send me a generic invite, and/or you appear to be just inviting everyone under the sun. I also post links to relevant blog posts when I remember.

I'm a pretty casual user of Facebook and I limit it to friend friends. That's not to say that some of them aren't professional acquaintances as well. But if you just met me at a conference somewhere and want to friend me, please understand if I ignore you.

I use Google+ primarily as an additional channel to draw attention to blogs and other material that I have created. I also participate in various conversations there. As with twitter, technology topics predominate on my Google+ stream.

I use flickr extensively for personal photography.

Links for 01-04-2016