Tuesday, April 26, 2016

DevOpsDays London 2016


April London was cool. But DevOpsDays London was hot and happening, selling out its venue in the shadow of St. Paul’s Cathedral. In many respects, it was a fairly typical DevOpsDays event with a focus on organization, process, and culture over individual products and toolchains. 

In other respects, it reflected the evolution of DevOps from something most associated with Silicon Valley “unicorns” to a core set of principles, processes, and practices that are broadly applicable. Also reflecting a location not far from the City of London, Barclays was a major sponsor and both financial services firms and major system integrators were well-represented in the audience and in the booths. 

With that as preamble, here are some of the discussions and other topics that caught my eye in one way or another during the course of the two-day event.

Metrics matter

As Splunk’s Andi Mann observed in an open spaces discussion, it’s nice to measure the things that you do—but it’s even better to measure what you actually accomplish. And better still is to measure accomplishments that closely map to business outcomes rather than IT outputs.

One participant noted that “We had all these metrics. 1100 of them. We ran a report every month. But why do these metrics matter? Will it help someone make a decision on a daily basis?” Another wryly observed that "shipping crap quicker isn't a metric anyone should want to measure."

This led to further discussion about the distinction between metrics, alerts, and logs—something that was also touched on in some of the presentations. Google’s Jeromy Carriere pointed out that, in contrast to logs that enable root cause investigation, "alerts need to be exciting. If they're boring, automate them."

Enterprise DevOps

As I wrote above, there was a significant enterprise, even conservative enterprise, angle to the event. For example, Claire Agutter talked about how to “Agile your ITIL.” (I suspect there are Silicon Valley companies lacking a developer who even knows how to spell ITIL.) 

Claire observed that “the reason companies look away from ITIL is it looks bureaucratic” even though "it's how IT gets done in many organizations.” She pointed out that the issue is that ITIL has been implemented as a slow-moving waterfall process in many organizations. However, it doesn’t need to be and, in fact, the best way to think about ITIL process is simply that it’s a consistent way of doing things. And what’s a great match for a consistent way of doing things? That would be automation (using a tool such as Ansible).

Bimodal IT?

Arguments about definitions and appropriate models often seem a bit “how many angels can dance on the head of a pin”-ish to me. I mostly felt that way when I was an analyst (and analysts generally love creating definitions and models) and I certainly feel that way now. That said, it seems to have become sufficiently trendy to bash Gartner’s bimodal IT model (see e.g. Kris Saxton’s "Bimodal IT: and other snakeoil” from this event) that I feel compelled to respond.

Most of what I think is worth saying I have already said and won’t repeat here. But, really, Kris largely made my general point in his talk when he said: "A lot of people take away the headlines. The details are largely sane but [bimodal is] most problematic as a vision statement communicated from the C level.” I guess I have trouble seeing the problem with a largely descriptive model for enterprise IT that will inevitably be upgraded and replaced in pieces and at different rates. And CIOs who don’t bother to read beyond the headlines and latch onto this (or any other) model to justify simply maintaining the status quo? Well, those organizations have bigger problems than a Gartner model that’s possibly insufficiently nuanced or visionary.

DevOpsSec

I led an open spaces discussion on best practices for security in a DevOps world especially when there are compliance and regulatory issues to consider. We actually ended up having two back-to-back security discussions; the one prior to mine focused on what “tolerate failure” means in a security/risk context. In practice, the discussions flowed into each other. In any case, the only issue was that so many people wanted to participate that it was a bit hard for everyone to pack themselves in!

The shared experiences around security were generally consistent with what I’ve heard in other discussions of this type. For example, there was a lot of interest in automated vulnerability scanning using tools such as OpenSCAP. Also mentioned was using human and machine-readable formats such as Ansible Playbooks to document processes and ensure that they’re followed consistently. (Alas, also consistent with other discussions was the familiar refrain that a lot of auditors are still not prepared to move beyond whatever paper-based checklists they’re already familiar with.)
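As a rough illustration of what that kind of automation can look like (my sketch, not something presented at the event), here's a minimal Python wrapper around the oscap command-line tool that fails a pipeline run if any rule fails. The datastream path and profile ID are examples only; substitute whatever your distribution ships.

    # Minimal sketch: run an OpenSCAP scan and fail the pipeline on failed rules.
    # The datastream path and profile ID below are examples only.
    import subprocess
    import sys

    DATASTREAM = "/usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml"
    PROFILE = "xccdf_org.ssgproject.content_profile_standard"

    def run_scan(results_file="results.xml"):
        # oscap xccdf eval exits 0 if all rules pass, 2 if any rule fails
        proc = subprocess.run([
            "oscap", "xccdf", "eval",
            "--profile", PROFILE,
            "--results", results_file,
            DATASTREAM,
        ])
        return proc.returncode

    if __name__ == "__main__":
        rc = run_scan()
        if rc == 2:
            print("Scan completed with failed rules; see results.xml")
        sys.exit(0 if rc == 0 else 1)

The results file, incidentally, is exactly the sort of machine-readable artifact you'd like to be able to hand an auditor in place of a paper checklist.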

My “the times they are a changin’” moment came, though, when someone piped up that he was one of those security guys who are often described as roadblocks to rapidly releasing software. He went on to add that this was the first conference he had ever attended that was not an explicit security conference and he was going to go back to his company and recommend that the security team attend more events of this type. This really highlighted just how siloed security processes can be while providing a hopeful illustration that DevOps really is starting to create new opportunities for collaboration and communication.

This last point is crucial. I know folks who get a bit grumpy about the degree to which DevOpsDays majors on culture rather than the cool tool du jour. Tech is certainly important to DevOps, both as a platform and as a toolchain. However, so many of us operate in an environment where it’s so natural to fixate on the latest shininess that it’s useful to be regularly reminded about the degree to which culture and more open organizations are even more fundamental components of digital transformation.

Monday, April 11, 2016

Connected Things 2016 recap


The Internet-of-Things (IoT) and DevOps seem to be in a race to see which can spawn the most conferences and events. The IoT corner notched a pair last week with the Linux Foundation’s new OpenIoT Summit in San Diego and Connected Things 2016, put on by the MIT Enterprise Forum at the Media Lab in Cambridge.

I haven’t looked at the contents from the OpenIoT Summit but I do have thoughts from Connected Things that mostly reinforced everything else I see going on in the space.

Everyone’s talking.

This 500-person-or-so event sold out. This is clearly a hot topic and there’s a sense that it must be important. As we’ll see, the whats, the hows, the whys, and the wherefores are a lot fuzzier. I’ve been through plenty of these new technology froths and I’m not sure I’ve ever seen quite such a mismatch between the hype and today’s more modest reality. No, hype’s not even quite right. It’s almost more of a utopian optimism about potential. Cue keynoter David Rose, the author of Enchanted Objects: Design, Human Desire, and the Internet of Things. This is about cityscapes and intelligent spaces and the automation of the physical world.

But what is it?

At a high level, I think the definition or definitions are pretty straightforward. There’s an element of interfacing the physical world to the digital one. And there’s a big role for data—probably coupled with machine learning, real-time control, and machine-to-machine (M2M) communications. 

But how should we think about the market and where’s the value? Things get a lot murkier. 

(As I was writing this, an email literally popped into my account that read in part: "That brand new car that comes preloaded with a bunch of apps? Internet of Things. Those smart home devices that let you control the thermostat and play music with a few words? Internet of Things. That fitness tracker on your wrist that lets you tell your friends and family how your exercise is going? You get the point.” My point is that we have to refine our thinking to have useful discussions.)

At Connected Things, IDC’s Vernon Turner admitted that "It is a bit of a wrestling brawl to get a definition.” (For those who don’t know IDC, they’re an analyst firm that is in the business of defining and sizing markets so the fact that IDC is still trying to come to grips with various aspects of defining IoT is telling.) 

In general, the event organizers did make a gallant attempt to keep the sessions focused on specific problem classes and practical use cases but you were still left with the distinct feeling that the topic was coiled and ready to start zinging all over the place.

Data data everywhere. What do we do with it?

Data is central to IoT. Returning to Vernon from IDC again, he said that “By 2020, 44 zettabytes of content will be created (though not necessarily stored). We’ve never seen anything that scales at this magnitude before.” He also said that there will be a need for an "IoT gateway operating system where you aggregate the sensors in some meaningful way before you get the outcome." (I’d add at this point that Red Hat, like others, agrees that this sort of 3-tier architecture--edge, gateway, and cloud/datacenter—is going to generally be a good architecture for IoT.)
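To make the gateway tier concrete, here's a toy Python sketch (my illustration, not anything IDC or Red Hat has specified) of a gateway that aggregates raw sensor readings into compact summaries before forwarding them upstream. The read_sensor() and send_upstream() functions are hypothetical stand-ins for real device and network calls.

    # Toy gateway-tier sketch: summarize raw edge readings locally so only
    # compact, meaningful data travels up to the cloud/datacenter tier.
    # read_sensor() and send_upstream() are hypothetical stand-ins.
    import random
    import statistics
    import time

    def read_sensor():
        # Stand-in for polling a real edge device (e.g., a temperature probe).
        return 20.0 + random.gauss(0, 1)

    def send_upstream(summary):
        # Stand-in for an MQTT/HTTP call to the datacenter tier.
        print("forwarding:", summary)

    def gateway_loop(window_size=60, interval_s=1.0, max_windows=None):
        readings, sent = [], 0
        while max_windows is None or sent < max_windows:
            readings.append(read_sensor())
            if len(readings) >= window_size:
                send_upstream({
                    "count": len(readings),
                    "mean": round(statistics.mean(readings), 2),
                    "min": round(min(readings), 2),
                    "max": round(max(readings), 2),
                })
                readings, sent = [], sent + 1
            time.sleep(interval_s)

    if __name__ == "__main__":
        gateway_loop(window_size=5, interval_s=0.1, max_windows=3)  # demo values

The point of the exercise is the shape of the thing: one summary record heads upstream for every sixty raw readings, which is what "aggregate the sensors in some meaningful way" cashes out to.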

What’s less clear is how effectively we’ll make use of all that data given that we don’t use data very effectively today. McKinsey’s Michael Chui, on the same panel, noted that "less than 1% of the data collected is used for business purposes—but I expect an expansion of value over time in analytics.” I do expect more effective use of data over time. It’s probably encouraging that retail is leading manufacturing in IoT according to Vernon—given that retail was not a particular success story during the c. 1990s “data warehouse” version of better selling through analytics.

Security matters—but how?

I’m tempted to just cut and paste the observations about security I made at the MassTLC IoT conference last year because, really, I’m not sure much has changed.

MIT’s Sanjay Sarma was downright pessimistic: “We have a disaster on our hands. We'll see a couple power plants go down. Security cannot be an afterthought. I'm terrified of this."

No one seemed to have great answers—at least at the edge device level. The footprints are small. Updates may not happen. (Though I had an interesting discussion with someone—forget who—at Linux Collaboration Summit last week who argued that they’re network devices; why shouldn’t they be updated?) Security may need to be instantiated in the platform itself, using the silicon as the secret (John Walsh, President, Sypris Electronics). There was also some resignation that maybe walled gardens will have to be the answer. But what then about privacy? What then about portability?

There’s a utopian side to IoT. But there’s a dystopian side too.

Sunday, April 10, 2016

Building a garage hoist for my canoe

[Photo: IMG 1306]

A couple of weeks ago, I finally got around to putting together a system that could 1.) Get my canoe into my garage in the winter when there are two vehicles there, 2.) Allow one person to lift it into position, and 3.) Fit it around existing structures, hardware, and other stored items. I’d been storing it on a rack outside but, especially with Royalex no longer being made, I wanted to treat it with a little more care.

The trickiest part, as you can see from the first photo, was that there’s a relatively small three-dimensional volume to fit the 17-foot canoe into. It had to go front to back, clear the garage door opener and garage door, ideally not force me to move the sea kayak, and have room for my small car to slide in underneath. It did all work, but barely, and it meant that I needed to cinch it up fairly precisely.

To start with, I just installed a couple of pulleys to lift the boat, but a Tripper with outfitting weighs over 80 pounds and it was just too heavy to readily lift up and then cinch into precise position. 

Now you can deal with the weight problem by adding additional pulleys so that you’re pulling more rope with less force. However, it can be hard to get the canoe to pull up evenly and I could never get this working in a way that positioned the boat as precisely as I needed it to be.
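For the curious, the underlying block-and-tackle math is simple, at least if you ignore friction: each additional rope segment supporting the load divides the force you have to apply but multiplies the rope you have to haul.

    % Ideal block and tackle with n rope segments supporting load W:
    F_{\text{pull}} = \frac{W}{n}, \qquad L_{\text{rope}} = n \, d_{\text{lift}}
    % e.g., an ~80 lb canoe on a 4:1 rig needs ~20 lb of pull,
    % at the cost of hauling 4 ft of rope for every foot of lift.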

[Photo: IMG 1307]

I next considered an electric winch. I went so far as to buy one and I think it would have worked but I was having trouble finding an ideal place to mount it and it seemed like overkill.

The solution I ended up with was a manual 600-pound winch that cost under $20 from Amazon. As you can see, two lines go up to a pair of pulleys. (I have overhead beams for storage on this side of the garage so I ended up just tying the pulleys to the existing beams.) One of the lines then heads down over the swivel pulley and is clipped into one cradle holding one end of the canoe. The other line goes through its pulley, which changes its direction 90 degrees to run to the other end of the canoe where a final pulley drops it down to be clipped into the other cradle.

Don’t read too much into the exact pulleys I used. I had a couple lying around and I bought another couple of “clothesline” pulleys at Home Depot. I could probably have mounted the canoe right side up with just a sling but I think I was able to get it up a little higher this way. (It’s a tight fit; I guess if I ever get a bigger car, I’ll have to revisit this. The canoe gets transported on an SUV.)

[Photo: IMG 1308]

I’ll probably add a couple more clips to the system just to make it a little easier to position the cradles. And, before next winter, I’ll put a backup safety sling of some sort in place. But, overall this system seems to work very well. It takes very little effort to hoist the canoe into place and, once all the rope lengths and slings are properly adjusted, it’s very repeatable and straightforward. The canoe hangs down a bit lower than is ideal but that’s pretty much dictated by the garage door layout.

Wednesday, April 06, 2016

Specialists may have their day

Irving Wladawsky-Berger, who among other accomplishments led IBM’s early Linux efforts, has a great post up regarding The Economist’s special report on Moore’s Law. Among the highlights:

Tissues, organs and organ systems have evolved in living organisms to organize cells so that together they can better carry out a variety of common biological functions.  In mammals, organ systems include the cardiovascular, digestive, nervous, respiratory and reproductive systems, each of which is composed of multiple organs.

General purpose computers have long included separate architectures for their input/output functions.  Supercomputers have long relied on vector architectures to significantly accelerate the performance of numerically-intensive calculations.  Graphic processing units (GPUs) are used today in a number of high performance PCs, servers, and game consoles.  Most smartphones include a number of specialized chips for dealing with multimedia content, user interactions, security and other functions.  Neural network architectures are increasingly found in advanced AI systems.  

As in evolution, innovations in special-purpose chips and architectures will be increasingly important as Moore’s Law fades away.

I agree with Irving. When I was an analyst I saw specialized architectures largely fail because "why bother when Moore's Law would get you there in a year or two anyway?" I'm not sure the implications of losing the CMOS scaling lever are as widely appreciated as they should be. (The former head of DARPA's microelectronics office pegged the gains at about 3500X over the past couple of decades; you don't lose a lever like that and just go on with business as usual.)

This will have a lot of implications for software certainly. I also wonder about the broader implications of smaller, lighter, cheaper, faster increasingly no longer being a given.

I wrote about this in more detail after the SDI Summit in Santa Clara at the end of last year.

Monday, March 28, 2016

The "laptops" in my bag

I took one of my Chromebooks to an event last week and a couple of people asked me about it. So I thought I’d take the opportunity to talk about my larger devices as an extension to my recent “What’s in my bag?” post. 

In general, I carry a laptop-like device and a tablet-like device. Laptop-like devices are really a lot better for typing on and tablet-like devices are better for reading or watching video on a plane. And I haven’t found anything that really switches between the two modes well. I’m happy to stipulate that the Microsoft Surface Pro 4 may be that device for some people but I’m really not in the Microsoft ecosystem any longer so that doesn’t work for me at this point.

More on tablet-like devices in a later post but, usually, my laptop-like device for travel is my 2015 13” MacBook Pro. Because it’s a full laptop, it’s the most versatile thing to take with me—especially if I might not always be connected. It weighs about 3.5 pounds and is .71 inches high. For me, this is about the perfect compromise for working at a desk and traveling. The smaller MacBook models are just a bit too small or trade off things like a second USB port, which keeps me from wanting to use them day-in and day-out. I do value compactness and light weight when I’m traveling but I find that, by the time you add chargers and dongles and various adapters, another pound or so of laptop just isn’t a big deal. This is still a very svelte laptop by any historical standard.

How about Chromebooks?

First, let me share my thoughts on Chromebooks in general and then I’ll get to a couple specific models. 

Chromebooks are pretty awesome for the right use. At around $250 for many models, they’re a great match for doing web-based tasks (browsing and online office suites or even many software development tasks). You even get a hidden Crosh shell that gives you utilities like ssh. You’re not totally dead in the water if you go offline—for example, Evernote switches back and forth between connected and non-connected modes pretty smoothly—but they’re definitely oriented toward situations where you have reliable WiFi. (On the one hand, reliable WiFi is increasingly common, tech conferences notwithstanding. On the other hand, it’s also hard to do many things disconnected even if you have a full laptop.)

For $250, you’re not going to get high-resolution screens, backlit keyboards, or things like that. But my 13” Dell Chromebook from 2014 sits on a table downstairs in my house where I often find it more convenient for doing a lot of searching than using a tablet. (Yes, I could go find my laptop but I find it being “just there” handy.)

A variety of higher-priced Chromebooks out there have more features. Personally, I get a lot less interested in a Chromebook as it approaches a ~$500 price point and beyond, given that it won’t replace a laptop for most people.

More interesting from a travel perspective is a device like the Asus Chromebook Flip. It’s a 10.1” laptop that weighs about 2 pounds. The touch-sensitive screen also flips into a tablet mode. In my experience, it also has pretty reliably “all day” battery life which is probably a couple hours longer than my MacBook. If I don’t need more than a Chromebook and want to go lightweight, this is what I carry.

A few caveats:

Unlike my 4GB Dell, I have the 2GB memory Asus model—mostly because that’s all they had at the Best Buy when I needed something during a trip when I accidentally left my MacBook at home. It does stutter every now and then if there are multiple tabs open, so go with 4GB. 

The keyboard is fine, but it is small. I have no issue with using this as a travel laptop but I wouldn’t want to type with a keyboard this size all the time.

The tablet mode is “OK.” By that I mean it feels a bit weird having the keypad under your fingers when you’re holding the folded laptop, though it can be used as an ebook reader in a pinch. I also don’t normally get movies and TV from Google Play so I don’t have a simple source for video content. This isn’t a problem with the device so much as the fact that Google is yet another separate ecosystem for content that you may or may not already be using.

So. For most work trips today, my MacBook Pro still usually wins. It’s just more versatile and I have access to a lot of presentations and other materials even if I’m not online. But, it’s not hard for me to imagine smaller (perhaps convertible in some way) devices becoming a more practical travel option over time.

Sunday, March 20, 2016

What's in my bag?

[Photo: in my bag]

These pieces about travel gear seem to be popular and I travel a lot, so here you go. Nothing on clothes or footwear here but I cover most everything else.

I've previously written about my carry-ons. Depending upon the trip, I typically either bring an Osprey travel backpack or a Patagonia over-the-shoulder bag that converts to a backpack in a pinch. It really depends on how much schlepping I'll be doing. I have a variety of other bags for when I'm checking luggage or am not traveling by air, but those two cover at least 80 percent of my air trips.

I usually carry a Timbuk2 messenger bag as my "personal piece," as the airlines like to refer to it. This is also my day-to-day "briefcase." Comfortable to carry, nice internal compartments, rugged as heck. You can also stuff a lot into it in a pinch. The main downside is that the material is heavy duty so it doesn't stuff down as much as I'd like when I consolidate it into another bag. Nor does it make a particularly good "man purse" for walking around town; it's too big. So I have a couple of other fanny packs or over-the-shoulder bags I carry when I don't need something bigger (as I often don't with laptops more petite than they used to be).

I tend to switch various bags around from trip to trip, so one thing I've found important is to compartmentalize contents. I use two primary bags for this.

An Eagle Creek Pack-It Specter is made out of a thin, light, high-tech material. (Technically I guess it’s for toiletries.) For most trips I find this is perfect for holding:

  • Spare pair(s) of reading glasses in case
  • Bottle opener (plus USB key)--thanks OpenShift by Red Hat
  • Corkscrew
  • Small first aid/medical kit
  • Prescription medications
  • Spare contact stuff
  • Travel-specific electronic adapters: e.g. ethernet dongle and ethernet cable, international plugs, car power adapter
  • Small plug-in microphone for iPhone
  • An envelope or two
  • Small plastic bags
  • Earplugs
  • Very small notebook
  • Chapstick
  • Wetnaps

For a longer trip or one that needs more of this miscellaneous gear than average, I have a second one of these bags that I can use or I consolidate this bag and bulkier discrete items into an Eagle Creek half-cube or something along those lines.

My day-to-day mesh ditty bag holds all my electronic cables, etc.:

  • USB plug
  • USB auto "cigarette lighter" adapter
  • USB to XYZ (Lightning, micro-USB, etc.) adapter cables
  • Hands-free adapter for telephone
  • Good ear canal headphones (I use Klipsch E6 but I'll probably splurge for Bose noise-canceling ones one of these days)
  • External battery to charge phone. I have a Tumi that I was given. It's bigger than my other ones but it does hold a lot of juice so that's what I carry.
  • Business cards/case
  • Remote laptop control for presentations (I use a Logitech model)
  • Any necessary dongles for laptops. (I assume VGA output only unless I've been told otherwise. I do have an HDMI dongle for my primary laptop and a retractable HDMI cable to use with hotel TVs but I don't routinely bring those.)
  • Plastic "spork"
  • Small LED headlamp
  • Pens

The retractable cables are nice although, if you look at the photo, you'll see it's a bit of a hodgepodge given that some of this is stuff I've picked up at tradeshows etc. Make sure that higher-current devices like tablets will actually charge using the parts you bring.

I've tried out Chromecast and travel routers for hotel rooms but I've given that up; fiddling with networking gear brings more pain than it's worth.

Prescription glasses in a tough case

iPhone 6

Given said iPhone 6, I don't regularly carry my Canon S100 any longer even though it shoots raw pics and has an optical zoom. I do have both Fujifilm and Canon systems as well and I'll bring one or the other--usually the Fujifilm EX-1 (along with associated batteries and chargers)--if I'm doing more serious photography.

The requisite quart Ziplock for toiletries of course.

For a long time, I looked for a travel portfolio to carry my passport, spare cash, backup credit card, and various other info/cards that I like to have with me on most trips. I had a couple tall portfolios that were too big; you don't really need a portfolio that carries a sheaf of airline tickets these days. I had a nice leather one I was given that was about the right size but it didn't zip up; I stopped using it after I lost some of its contents when they fell out one trip. I finally found one by Eagle Creek that is just right for me. (They don’t seem to make the one I have any longer; this looks like the current equivalent.)

Typically my MacBook Pro (but sometimes an Asus flip-top Chromebook) plus (usually) a tablet device of some sort, whether a Kindle Paperwhite or an iPad 3.

(Mostly for longer trips) a thin 8 1/2 x 11" plastic portfolio to carry tickets, printed-out information, maps, etc. Yeah, a lot of this could be (and is) on my phone but I find carrying some paper backup to often be useful.

I usually just carry my regular wallet (leather, not a lot in it, put in a bag or a side pocket) though I do have various zippered wallets that hang around the neck or otherwise aren't in a pocket that I'll sometimes take for non-city trips.

Nylon (or whatever) reusable grocery bag. Weighs nothing and more and more places are starting to charge for bags. Can be handy to organize stuff as well.

I have a small lightweight mesh laundry bag I often bring but almost any sort of bag will do.

Sometimes I pack either a foldable duffle or a foldable day pack for extra space on the return leg.

I'll close by noting that I don't typically bring everything listed here on a single trip and certainly not on the all-too-typical out-and-back to a hotel and office trip. That said, I do try to keep the "standard gear" relatively compartmentalized and ready to grab and go, even if I could trim it back a bit for a given trip. Other items that aren't part of my day-to-day routine I mostly keep in a box which I can go through if I'm going to take a longer/international/more varied trip.

Friday, March 18, 2016

DevOps Lessons presentation at IEEE DevOps events

Earlier this week I spoke at the IEEE DevOps for Hybrid Cloud event in "Silicon Beach," CA (aka El Segundo) at the Automobile Driving Museum. (Did I mention this is outside LA?) I've given variants on this talk before but I'm continually refining it. It seems to go over well although I'm always worried that I try to cover too much ground with it. In any case, we had a great audience. It was probably one of the most interactive and engaged crowds I've had in a while.

Here's the abstract:

Manufacturing has widely adopted standardized and automated processes to create designs, build them, and maintain them through their life cycle. However, many modern manufacturing systems go beyond mechanized workflows to introduce empowered workers, flexible collaboration, and rapid iteration.

Such behaviors also characterize open source software development and are at the heart of DevOps culture, processes, and tooling. In this session, Red Hat’s Gordon Haff will discuss the lessons and processes that DevOps can apply from manufacturing using:

  • Container-based platforms designed for modern application development and deployment.
  • The ability to design microservices-based applications using modular and reusable parts.
  • Iterative development, testing, and deployment using platform-as-a-service and integrated continuous delivery systems.

Monday, March 14, 2016

It's about team size: Not monolith vs. microservice


Basecamp’s David Heinemeier Hansson has written probably the most readable and balanced dissection of the monolith vs. microservices debate that I’ve run across. Go ahead and read it. A couple choice quotes:

Where things go astray [with microservices] is when people look at, say, Amazon or Google or whoever else might be commanding a fleet of services, and think, hey it works for The Most Successful, I’m sure it’ll work for me too. Bzzzzzzzzt!! Wrong!

DHH goes on to write that

Every time you extract a collaboration between objects to a collaboration between systems, you’re accepting a world of hurt with a myriad of liabilities and failure states. What to do when services are down, how to migrate in concert, and all the pain of running many services in the first place.

...all that pain is worth it when you have no choice. But most people do have a choice, and they do have an alternative. So allow me to present just one such choice: The Majestic Monolith!

The Majestic Monolith that DHH describes is essentially a well-architected (mostly) monolithic application that is well-understood by the individuals working on it. 

A point worth highlighting here. A team of about 12 programmers works on the Basecamp application described in this post. That’s not all that much bigger than Amazon’s “two-pizza” team size which, in turn, is often equated with small, bounded-context, single-function teams that develop individual microservices.

And that’s a key takeaway I think. I’m not sure this is, or should be, a debate about monoliths vs. microservices. Rather, in many cases, it’s a discussion about team coordination. Prematurely optimize into patterns based on tiny discrete services and you silo knowledge and create architectural complexity. Let individual applications grow too large—especially in the absence of a common vision—and you get brittle and inflexible apps.

Either make components notionally independent of each other (microservices) or you’d better plan on efficiently coordinating changes. 

Friday, March 04, 2016

At least one of our long national nightmares is over


An interesting piece of news crossed my desk (well, actually appeared in my browser) this week: The (presumably) final resolution of the entire SCO saga. If you missed it, that’s not entirely surprising. The long, sordid saga was effectively put to bed a long time ago when SCO lost some key court decisions and went bankrupt. However, there remained a complicated set of claims and counterclaims that were theoretically just dormant and could have been reanimated given a sufficiently bizarre set of circumstances. 

However, on February 26:

Plaintiff/Counterclaim-Defendant, The SCO Group, Inc. (“SCO”), and Defendant/Counterclaim-Plaintiff, International Business Machines Corporation (“IBM”), jointly move for certification of the entry of final judgment on the Court’s orders concerning all of SCO’s claims, including the (a) Order filed on Feb. 5, 2016, granting IBM’s Motion for Partial Summary Judgment (Docket No. 782), (b) Order filed on Feb. 8, 2016, granting IBM’s Motion for Partial Summary Judgment (Docket No. 783), (c) Partial Judgment Dismissing SCO Claims filed on July 10, 2013, and (d) Order filed on July 1, 2005, denying SCO’s Motion for Leave to File a Third Amended Complaint (Docket No. 466).

There’s more legalese but this would seem to be as much of a wrap as there ever is in the legal world.

I started covering this drama back in 2003 when SCO and their lawyers did their roadshow to industry analysts to show off the code that had been purportedly copied into Linux. (I was working at Illuminata at the time.) We wouldn’t sign their NDA but they showed us some code anyway and I ended up writing a research note “SCO’s Derived Case Against Linux.” I’m sure it got some of the details wrong but this was before it was particularly clear what was even being claimed. (Of course, that would remain a pattern throughout much of the case.)

I then ended up helping my colleague Jonathan Eunice write an expert witness report for IBM once those cases got rolling. I haven’t been able to discuss that fact or anything else about the case while the claims and counterclaims remained open. It was a busy number of months working on that report. In all, it was a fascinating experience although one I’m not sure I would want to make a practice of. It also gave me an appreciation for why lawsuits like these are so incredibly expensive. 

Unfortunately, the expert witness reports remain under court seal and that’s unlikely to change. That’s a bit frustrating both because I think we did some good work that ended up not really being used and because there’s a lot of historical information about the claims SCO made that will probably never see the light of day. But, in any case, I still can’t say too much about the details that I know.

The whole set of cases was such a weird trip down the rabbit hole. Probably the confusion over who owned the UNIX copyrights is Exhibit A. Wouldn’t you have thought the executives involved with the supposed sale would have remembered and that the contract would have been crystal clear on this basic point? One would, but this is the SCO saga we’re talking about.

It’s hard to argue that the SCO cases hurt open source and Linux. Perhaps they slowed down adoption in some circles. But the fact that Linux made it through what, at one time, looked to be a serious threat perhaps even strengthened it in the long run. 

Tuesday, March 01, 2016

2016 MIT Sloan CIO Symposium


I just received a notice for this year’s MIT Sloan CIO Symposium that’s happening at MIT on May 18. I’ve covered it as press for a number of years; here’s my story from last year. It always has good speakers (with a nice mix of business and academic)—as well as panels that are better than the norm at conferences. 

This year’s theme is “Thriving in the Digital Economy” with topics including:

  • Impact digital has on the nature of work, the workplace, and innovation
  • Big Data 2.0 [1] and Data’s Strategic Role
  • Platform Strategies, IoT, Cybersecurity, and Blockchain

I’m particularly interested in the blockchain session which Julio Faura of Santander is giving.

A call for applications is also now open for the Innovation Showcase, which will feature 10 early-stage companies providing "innovative and valuable IT solutions” at the event. Deadline for submissions is March 26.

[1] I’m not so sure we ever really achieved Big Data 1.0, but I digress.

Friday, January 22, 2016

Book Review: Cooking for Geeks, Second Edition

As a single book, this combination of food science, recipes, equipment, ingredients, experimentation, interviews, and geeky humor is hard to beat. It’s not necessarily deep in all those areas, but it’s an appealing total package for those curious about the why’s of food. 

It’s the second edition of this book by Jeff Potter. At 488 pages, it’s about 50 pages longer than its predecessor. There are new interviews and graphics along with a fair bit of updating and rearranging from the prior edition—although the overall look, feel, and organization aren’t a major departure.

The book is written in a lighthearted and gently humorous way. Random sample from the intro to Taste, Smell, and Flavor: “You open your fridge and see pickles, strawberries, and tortillas. What do you do? You might answer: create a pickle/strawberry burrito. Or if you’re less adventurous, you might say: order pizza. But somewhere between making a gross-sounding burrito and ordering takeout is another option: figuring out the answer to one of life’s deeper questions: How do I know what goes together?” Probably not to everyone’s taste I realize, but it works for me.

It covers a broad swath of the science. The aforementioned tastes, smells, and flavors. Time and temperature—and what those mean for cooking proteins and melting fats. Additives like emulsifiers and thickening agents. Air, water, and leavening agents. It’s not the science tome that is Harold McGee’s On Food and Cooking, but it’s a more easily accessible presentation. (Though, if you read this book and enjoy it, by all means pick up McGee and vice versa.)

Cooking for Geeks at least touches on most of the major modernist cooking practices including sous vide and practical tips for same. Arguably, some of the DIY material around sous vide is a bit dated given the price drops of modern immersion circulators but this is about experimentation after all. (The link in the book does go to a list of current equipment options though.) There are also interviews with many of the usual suspects in that space such as Nathan Myhrvold and Dave Arnold.

Is this book for the cooking-curious geek who doesn’t have much real cooking experience? It could be but they might want to pair this book with another that was more focused on basic cooking techniques. The recipes here are relatively straightforward and the instructions are clear, but there’s not a lot of photography devoted to the recipes and the instructions for things like Béchamel Sauce are probably a bit bare-bones for a first-timer. 

I’d also generally note that the recipes are often there to provide examples of the science discussion. There isn’t a lot of discussion about why this specific recipe is being made with this specific set of techniques. For that sort of thing, I recommend book(s) from the America’s Test Kitchen empire, perhaps starting with their The New Best Recipes book—which also has the virtue of being a pretty comprehensive set of basic and not-so-basic recipes. It’s also rather sober and by-the-numbers, a much different experience. (Alton Brown also seems to have his followers in geeky circles although I’ve never personally been all that enthusiastic.)

One final point is that, for many, this is a book you will flip through and alight on a topic of interest. It’s not that you couldn’t read it from cover to cover, but the many sidebars and interviews and short chunks of material seem to encourage non-linear exploration. 

Bottom line: 5/5. Highly recommended for anyone with an interest in the science of cooking even if they don’t want to get into deep chemistry and physics.

Disclaimer: This book was provided to me as a review copy but this review represents my honest assessment.

The new distributed application development platform: Breaking down silos


A document came out of Gaithersburg, Maryland in 2011. Published by the National Institute of Standards and Technology it was simply titled “The NIST Definition of Cloud Computing.” If you attended tech conferences during that period, reciting some version of that definition was pretty much a requirement. The private, public, and hybrid cloud terms were in this document. So were concepts like on-demand self-service and resource pooling. As were the familiar Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) service models. 

NIST didn’t invent any of this out of whole cloth. But by packaging up a sort of industry and government consensus about the basics of cloud computing, they regularized and standardized that consensus. And, overall, it worked pretty well. IaaS was about provisioning fundamental computing resources like processing, storage, and networks. SaaS was about providing applications to consumers.

As for PaaS? PaaS was about applications created using programming languages, libraries, services, and tools supported by the provider. 

Arguably, this PaaS definition was never as neat as the others. IaaS resources were easy to understand; they were like the resources you have on a server, except cloudier. And SaaS was just an app on the Web—application service providers (ASPs) reimagined, if you will. PaaS was sort of everything that was above infrastructure but below an application an end-user could run directly. Cloud-enabled middleware, hooks to add features to a single online service like Salesforce.com, single-purpose hosted programming environments (as Google App Engine and Azure were initially), and open extensible environments like OpenShift that could also be installed on-premise. Most fell broadly under the PaaS rubric.

The NIST definition also didn’t really capture how the nature of the service model depends upon the audience to an extent. Thus, Salesforce.com is primarily a SaaS as far as the end-user is concerned but it’s a PaaS in the context of developers extending a CRM application. 

Today, I’d argue that the lines NIST drew probably still have some practical significance but the distinctions are increasingly frayed. IaaS platforms have almost universally moved beyond simple infrastructure. OpenStack has compute (Nova), storage (Swift and Cinder), and networking (Neutron) components but it also includes database projects (Trove), identity management (Keystone), and the Heat orchestration engine to launch composite cloud applications.

In many cases these higher-level functions can be either used standalone or replaced/complemented by more comprehensive alternatives. For example, in a hybrid cloud environment, a cloud management platform like Red Hat CloudForms (ManageIQ is the upstream project) provides multi-cloud management and sophisticated policy controls. The IaaS+ term is sometimes used to capture this idea of more than base-level infrastructure but less than a comprehensive developer platform.

In the case of SaaS, today’s APIs everywhere world means that most things with a UI also can be accessed programmatically in various ways. In other words, they’re platforms—however limited in scope and however tied to a single application.

But, really, the fraying is broader than that. I’ve argued previously that we’re in the process of shifting toward a new style of distributed application infrastructure and of developing applications for that infrastructure. It won’t happen immediately—hence, Gartner’s bimodal IT model—but it will happen. In the process, traditional specialties/silos (depending upon your perspective) are breaking down. This is true whether you’re talking enterprise buyers/influencers, IT organizations, industry analysts, or vendors. 

As a result, it's hard to separate PaaS--in the relatively narrow sense that it was first discussed--from the broader idea of an application development platform with middleware integration, messaging, mobile, and other services. Red Hat's doing a lot of work to bridge those two worlds. For example, Red Hat’s JBoss Middleware portfolio of libraries, services, frameworks, and tools is widely used by developers to build enterprise applications, integrate applications and data, and automate business processes. With JBoss xPaaS Services for OpenShift, these same capabilities are being offered integrated with OpenShift. This lets developers build applications, integrate with other systems, orchestrate using rules and processes, and then deploy across hybrid environments.

The advantage of the xPaaS approach is that it doesn’t merely put middleware into the cloud in its traditional form. Rather, it effectively reimagines enterprise application development to enable faster, easier, and less error-prone provisioning and configuration for a more productive developer experience. Eventually all of the JBoss Middleware products will have xPaaS variants. In each case, the core product is exactly the same whether used in a traditional on-premise manner or as xPaaS, so apps can be moved seamlessly between environments. In the xPaaS environment, JBoss Middleware developers experience benefits from OpenShift-based user interface enhancements, automated configuration, and a more consistent experience across different middleware products.

Then DevOps [1] comes along to blur even more lines because it brings in a whole other set of, often, open source tooling including CI/CD (e.g. Jenkins), automation and configuration management (e.g. Ansible), collaboration, testing, monitoring, etc. These tools are increasingly part of that new distributed application platform, as is the culture of iteration and collaboration that DevOps requires.

I have trouble not looking at this breakdown of historical taxonomies as a positive. It offers the possibility of more complete and better integrated application development platforms and more effective processes to use those platforms. It’s not the old siloed world any longer.

[1] I just published this white paper that gives my/Red Hat’s view of DevOps.

Photo credit: Flickr/cc https://www.flickr.com/photos/timbodon/2200884398

 

Friday, January 15, 2016

Why bimodal is a useful model


You hear a lot about “bimodal” IT these days. Gartner’s generally considered to have coined that specific term but similar concepts have percolated up from a number of places. Whatever you call it, the basic idea is this:

You have Classic IT that’s about attributes like stability, robustness, cost-effectiveness, and vertical scale. These attributes come through a combination of infrastructure and carefully controlled change management processes. IT has classically operated like this and it works well. Financial trades and transfers execute with precision, reliability, speed, and accuracy. The traditional enterprise resource planning systems operate consistently and rarely have a significant failure.

By contrast, cloud-native IT emphasizes adaptability and agility. Infrastructure is software-defined and therefore programmable through APIs. Cloud-native applications running on OpenStack infrastructure are loosely coupled and distributed. In many cases, the dominant application design paradigm will come to be microservices — reusable single-function components communicating through lightweight interfaces.

The argument for taking an explicit bimodal approach is essentially two-fold. 

On the one hand, organizations have to embrace “go fast” cloud-native platforms and practices going forward if they’re going to be able to use IT to help strategically differentiate themselves. And they increasingly have to. Apps. Software services. Robust online customer service. The list goes on. 

On the other hand, for most organizations with existing IT investments, it’s not realistic—too expensive, too disruptive—to just call in the bulldozers and start over from scratch. Yet it’s equally impractical to just treat IT as a uniform “timid middle” based on practices and approaches too fast for traditional business systems but too slow for fast-paced, experimental innovation.

That said, the model has its critics. In my view, most of these criticisms come from misunderstanding (willfully or otherwise) what a thoughtful application of this model is really about. So I’m going to take you through some of these criticisms and explain why I think they’re off the mark. 

Bimodal IT treats traditional IT as legacy and sets it up for failure.

This critique is probably the most common one I hear and, in all fairness, it’s partly because some of the nuances of the bimodal model aren’t always obvious. Gartner, at least, has always been explicit that Mode 1 (classic) IT needs to be renovated and modernized. Here’s just one quote from CSPs' Digital Business Requires a Bimodal IT Transformation Strategy, October 2014: "Modifying the existing IT infrastructure for an effective and efficient use, while maintaining a reliable IT environment, requires CIOs to implement incremental IT modernization." 

Modernization is indeed key to make the model work. Another Gartner note DevOps is the Bimodal Bridge (April 2015) notes: "DevOps is often thought of as an approach only applicable to a Mode 2 or nonlinear style of IT behavior. Yet there are key parts or patterns of DevOps that are equally applicable to Mode 1 IT organizations that enable DevOps to be a bridge between the two IT approaches.” 

Incrementally upgrading platforms (e.g. proprietary Unix to Linux) and modernizing application development practices are essential elements of a bimodal approach.

Bimodal IT is a crutch for lazy CIOs

Related to the above, this argument goes that bimodal IT gives CIOs a license not to aggressively pursue cloud native initiatives on the grounds that they can just argue that most of their IT needs to remain in its go-slow form. At least, as John Willis has put it, “I think a lot of folk think that mode 1 is the wrong message… :-)” or “I think also most feel (like me) that Bi-modal is a get out of jail free card for bad process/culture…"

Those points are fair, at least up to a point. But, Dave Roberts also made some points in the discussion that largely reflect my thinking as well. He notes that “Most of it [issues with bimodal] seems predicated on piss-poor management practices, which if you have those you’re screwed anyway.” He adds “If you want to be lazy, you will find a way. But that’s true regardless of model."

At the end of the day, I think what we’re seeing to a certain degree here is a debate between pragmatists and those who place a higher priority on moving fast even if doing so breaks things. I’m inclined to align with the pragmatists while acknowledging that part of pragmatism is recognizing when circumstances require breakage over taking measured steps. To give Dave the final word: “Obviously, use the model wisely. If your market requires speed on all fronts, then you need Mode 2 everywhere."

Bimodal is too simple

This is essentially the opposite argument. Bimodal doesn’t capture the complexity of IT.

Sometimes the critique is quite specific. For example, Simon Wardley argues that "When it comes to organising then each component not only needs different aptitudes (e.g. engineering + design) but also different attitudes (i.e. engineering in genesis is not the same as engineering in industrialised). To solve this, you end up implementing a "trimodal" (three party) structure such as pioneers, settlers and town planners which is governed by a process of theft."

Alternatively, some of the criticism boils down to a more generic argument that IT is complex and heterogeneous and no general model can really capture that complexity and heterogeneity so we shouldn’t even try.

The value of a bimodal model

To this last point, I say that all models simplify and abstract but they’re no less useful for that. They suggest common patterns and common approaches. They’re not (or shouldn’t be) intended as rigid prescriptive frameworks that precisely describe messy realities but they may offer insights into moving those messy realities forward in a practical way.

Is bimodal the only possible model? Of course not. I’m not going to argue that, say, Pioneers/Settlers/Town Planners isn't an equally valid framework. If that, or something else, works for you, go for it! All I can say is that a lot of IT executives I speak with find the two-speed IT lens a useful one because it resonates with their experiences and their requirements.

All of which suggests to me that it’s a useful model for many IT organizations at this point in time. Just don’t forget that it is, after all, only a model and a guide and not a detailed roadmap to be followed slavishly.

Photo by Stephen Shankland. Used with permission. https://www.flickr.com/photos/shankrad/24079824410/ 

Thursday, January 07, 2016

IDC survey says: Go cloud-native but modernize too

IDC’s recent survey of “cloud native” early adopters tells us that existing applications and infrastructure aren’t going away. 83 percent expect to continue to support existing applications and infrastructure for the next three years. In fact, those who are furthest along in shifting to distributed, scale-out, microservices-based applications are twice as likely to say that they are going to take their time migrating than those who are less experienced with implementing cloud native applications and infrastructure. It’s easier to be optimistic when you haven’t been bloodied yet!

IDC conducted this survey of 301 North America and EMEA enterprises on Red Hat’s behalf; the results are published in a December 2015 IDC InfoBrief entitled Blending Cloud Native & Conventional Applications: 10 Lessons Learned from Early Adopters.


It’s worth noting that even these cloud native early adopters plan to also modernize their existing conventional infrastructure. For example, in addition to the 51 percent continuing with their virtualization plans, 42 percent plan to migrate to software-defined storage/networking and to containerize applications currently running on virtual or physical servers. 

This is an important point. The bimodal IT concept—originally a Gartnerism but now used pretty widely to connote two speeds or two modes of IT delivery—is sometimes critiqued for a variety of reasons. (To be covered in a future post.) However, perhaps the most common criticism is that Mode 1 is a Get Out of Jail Free card for IT organizations wanting to just continue down a business as usual path. This survey shows that those furthest along in transitioning to cloud-native don’t see things that way at all. (It should be mentioned that Gartner doesn’t either and sees modernization as a key component of Mode 1.)

Open source was almost universally seen as playing a key role in any such strategy with 96 percent viewing open source as an enabler of cloud native integration and conventional app modernization. No surprises there. An earlier IDC survey on DevOps early adopters found a similar view of open source with respect to DevOps tooling.

The study also found that security and integration were important elements of a cloud native transition strategy. For example, 51 percent identified security, user access control, and compliance policies as a technical factor that would have the greatest impact on their organization’s decisions about whether applications are best supported by conventional or cloud native architectures.

The #2 factor (42 percent) was the ability to support/integrate with existing databases and conventional applications--highlighting the need for management tools and process integration between new applications and existing workflows, apps, and data stores. Business Process Optimization was identified as an important integration element. Strategies included structured and unstructured data integration, business process automation, model-driven process management, and the use of an enterprise service bus and cloud APIs.

If I had to choose one word to convey the overall gestalt of the survey, I think I’d choose “pragmatic.” IDC surveyed cloud native early adopters, so these are relatively leading edge IT organizations. Yet, these same organizations also emphasized SLAs and minimizing business risks. They stress avoiding business and end-user disruption. They plan to transition gradually.  

Wednesday, January 06, 2016

Beyond general purpose in servers


Shortly before I checked out for the holidays, I had the pleasure to give a keynote at SDI Summit in Santa Clara, CA. The name might suggest an event all about software (SDI = software-defined infrastructure) but, in fact, the event had a pretty strong hardware flavor. The organizers, Conference Concepts, put on events like the Flash Memory Summit. 

As a result, I ended up having a lot more hardware-related discussions than I usually do at the events I attend. This included catching up with various industry analysts who I’ve known since the days I was an analyst myself and spent a lot of time looking at server hardware designs and the like. In any case, some of this back and forth started to crystallize some of my thoughts around how the server hardware landscape could start changing. Some of this is still rather speculative. However, my basic thesis is that software people are probably going to start thinking more about the platforms they’re running on rather than taking for granted that they’re racks of dual-socket x86 boxes. Boom. Done.

What follows are some of the trends/changes I think are worth keeping an eye on.

CMOS scaling limits

If this were a headline, it would probably be titled “The End of Moore’s Law,” but I’m not looking for the clicks. This is a complicated subject that I’m not going to be able to give its appropriate due here. However, it’s at the root of some other things I want to cover. 

Intel is shipping 14nm processors today (Broadwell and Skylake). It’s slipped the 10nm Cannonlake out to the second half of 2017. From there things get increasingly speculative: 7nm, 5nm, maybe 3nm.

There are various paths forward to pack in more transistors. It seems as if there’s a consensus developing around 3D stacking and other packaging improvements as a good near-ish term bet. Improved interconnects between chips are likely another area of interest. For a good read, I point you to Robert Colwell, presenting at Hot Chips in 2013 when he was Director of the Microsystems Technology Office at DARPA.

However, Colwell also points out that from 1980 to 2010, clocks improved 3500X while microarchitectural and other improvements contributed only about another 50X performance boost. The process shrink marvel expressed by Moore’s Law (Observation) has overshadowed just about everything else. This is not to belittle in any way all the hard and smart engineering work that went into getting CMOS process technology to the point where it is today. But understand that CMOS has been a very special unicorn and an equivalent CMOS 2.0 isn’t likely to pop into existence anytime soon.
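
To make the lopsidedness concrete, here’s the back-of-the-envelope arithmetic (mine, not Colwell’s) converting those figures into compound annual rates over the 30 years:

    # Implied compound annual improvement rates, 1980-2010 (30 years),
    # from Colwell's rough figures: ~3500X from clocks, ~50X from
    # microarchitecture and everything else.
    years = 30
    clock_cagr = 3500 ** (1 / years) - 1  # ~0.31, i.e. ~31% per year
    other_cagr = 50 ** (1 / years) - 1    # ~0.14, i.e. ~14% per year
    print(f"clocks: ~{clock_cagr:.0%}/year, everything else: ~{other_cagr:.0%}/year")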

Intel 10nm challenges

Moore’s Law trumped all

There are doubtless macro effects stemming from processors not getting faster or memory not getting denser (at least as quickly as in the past), but I’m going to keep this focused on how this change could affect server designs.

When I was an analyst, we took lots of calls from vendors wanting to discuss their products. Some were looking for advice. Others just wanted us to write about them. In any case, we saw a fair number of specialty processors. Many were designed around some sort of massive-number-of-cores concept. At the time (roughly the back half of the 2000s), there was a lot of interest in thread-level parallelism. Furthermore, fabs like TSMC were a good option for hardware startups wanting to design chips without having to manufacture them.

Almost universally, these startups didn’t make it. Part of it is just that, well, most startups don’t make it and the capital requirements for even fabless custom hardware are relatively high. However, there was also a pattern.

Even in the best case, these companies were fighting a relentless doubling of processor speed every 18 to 24 months from Intel (and sometimes AMD) on the back of enormous volume. So these companies didn’t just need to have a more optimized design than x86. They needed to be so much better that they could overcome x86 inertia while competing, on much lower volume, against someone improving at a rapid, predictable pace. It was a tough equation.
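
The equation is easy to sketch. Assume, pessimistically for the startup, that its specialty design stands still once it ships while the incumbent keeps doubling every 18 to 24 months (a simplification I’m making for illustration, not anyone’s actual model):

    import math

    def years_to_erase(advantage, doubling_years):
        # Time for an incumbent doubling performance every `doubling_years`
        # to close a fixed performance gap of `advantage`.
        return math.log2(advantage) * doubling_years

    for advantage in (4, 10):
        for doubling in (1.5, 2.0):
            print(f"{advantage}X edge, doubling every {doubling} years: "
                  f"erased in ~{years_to_erase(advantage, doubling):.1f} years")

Even a 10X edge is gone in five to seven years, roughly what it takes to fund, design, and ship a chip and then grow a software ecosystem around it.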

I saw lots of other specialty designs too: FPGAs, GPU computing, special interconnect designs. Some of this has found takers in high performance computing, which has always been more willing to embrace the unusual in search of speed. However, in the main, the fact that Moore’s Law was going to correct any performance shortcomings in a generation or two made sticking with mainstream x86 an attractive default.

The rise of specialists

In Colwell’s aforementioned presentation, he argues that the “end of Moore’s Law revives special purpose designs.” (He adds the caveat to heed the lessons of the past and not to design unprogrammable engines.) Intel’s recent $16.7 billion acquisition of Altera can be seen as part of a transition to a world in which we see more special purpose chips. As the linked WSJ article notes: "Microsoft and others, seeking faster performance for tasks like Web searches, have experimented with augmenting Intel’s processors with the kind of chips sold by Altera, known as FPGAs, or field programmable gate arrays. Intel’s first product priority after closing the Altera deal is to extend that concept."

Of course, CPUs have long been complemented by other types of processors for functions like networking and storage. However, the software-defined trend has been at least somewhat predicated on moving away from specialty hardware toward a standardized programmable substrate. (So, yes, there’s some irony in discussing these topics at an SDI Summit.)

I suspect that it’s just a tradeoff that we’ll have to live with. Some areas of acceleration will probably standardize and possibly even be folded into CPUs. Other types of specialty hardware will be used only when the performance benefits are compelling enough for a given application to be worth the additional effort. It’s also worth noting that the increased use of open source software means that end-user companies have far more options to modify applications and other code to use specialized hardware than when they were limited to proprietary vendors.  

ARM AArch64

Flagging ARM as another example of potential specialization is something of a no-brainer even if the degree and timing of the impact are TBD. ARM is clearly playing a big part in mobile. But there are reasons to think it may have a bigger role on servers than in the past. That it now supports 64-bit is huge because that's table stakes for most server designs today. Almost as important is that ARM vendors have been working to agree on certain standards.

As my colleague Jon Masters wrote when we released Red Hat Enterprise Linux Server for ARM Development Preview: "RHELSA DP targets industry standards that we have helped to drive for the past few years, including the ARM SBSA (Server Base System Architecture), and the ARM SBBR (Server Base Boot Requirements). These will collectively allow for a single 64-bit ARM Enterprise server Operating System image that supports the full range of compliant systems out of the box (as well as many future systems that have yet to be released through minor driver updates)." (Press release.)
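
One practical consequence for the software side: code that has quietly assumed x86 may need to start asking what it’s running on. A trivial illustration in Python (the strings are the machine names Linux conventionally reports; the per-architecture choices are hypothetical):

    import platform

    machine = platform.machine()
    if machine == "aarch64":
        print("64-bit ARM: load any AArch64-optimized paths")  # hypothetical choice
    elif machine == "x86_64":
        print("x86-64: the longstanding default assumption")
    else:
        print(f"something else entirely: {machine}")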

There are counter-arguments. x86 has a lot of inertia, even if some of the contributors to that inertia, like proprietary packaged software, are less universally important than they were. And there’s lots of wreckage associated with past reduced-power servers, both ARM-based (Calxeda) and x86-compatible (Transmeta) designs.

But I’m certainly willing to entertain the argument that AArch64 is at least interesting for some segments in a way that past alternatives weren’t.

Parting thoughts

In the keynote I gave at SDI Summit, The New Distributed Application Infrastructure, I argued that we’re in a period of rapid transition from a longtime model built around long-lived applications installed in operating systems to one in which applications are far more componentized, abstracted, and dynamic. The hardware? Necessary but essentially commoditized.

That’s a fine starting point for thinking about where software-defined infrastructure is going. But I increasingly suspect it makes a simplifying assumption that won’t hold. The operating system will help to abstract away changes and specializations in the hardware foundation as it has in the past. But that foundation will have to adjust to a reality that can’t depend on CMOS scaling to advance.

Tuesday, January 05, 2016

Getting from here to there: conventional and cloud-native


Before the holiday break, I wrote a series of posts over at the Red Hat Stack blog in which I added my thoughts about cloud native architectures, bridging those architectures with conventional applications, and some of the ways to think about transitioning between different architectural styles. My jumping off point was an IDC Analyst Connection in which Mary Johnson Turner and Gary Chen answered five questions about "Bridging the Gap Between Cloud-Native and Conventional Enterprise Applications." Below are those questions and the links to my posts:

Cloud-native application architectures promise improved business agility and the ability to innovate more rapidly than ever before. However, many existing conventional applications will provide important business value for many years. Does an organization have to commit 100% to one architecture versus another to realize true business benefits?

http://redhatstackblog.redhat.com/2015/11/19/does-cloud-native-have-to-mean-all-in/

What are the typical challenges that organizations need to address as part of this evolution [to IT that at least includes a strong cloud-native component]?

http://redhatstackblog.redhat.com/2015/11/30/evolving-it-architectures-it-can-be-hard/

How will IT management skills, tools, and processes need to change [with the introduction of cloud-native architectures]?

http://redhatstackblog.redhat.com/2015/12/03/how-cloud-native-needs-cultural-change/

What about existing conventional applications and infrastructure? Is it worth the time and effort to continue to modernize and upgrade conventional systems?

http://redhatstackblog.redhat.com/2015/12/09/why-cloud-native-depends-on-modernization/

What types of technologies are available to facilitate the integration of multiple generations of infrastructure and applications as hybrid cloud-native and conventional architectures evolve?

http://redhatstackblog.redhat.com/2015/12/15/integrating-classic-it-with-cloud-native/

Photo credit: Flickr/CC Scott Robinson https://www.flickr.com/photos/clearlyambiguous/390311509 

Monday, January 04, 2016

What's up with Gordon in 2016?

First off, let me say that I’m not planning big changes although I’m sure my activities will continue to evolve as the market does. Red Hat’s doing interesting work in a diverse set of related areas and I’ll continue to evangelize those technologies, especially as they span multiple product sets. With that said, here’s how the year looks to be shaping up so far.

Travel and speaking. Last year broke a string of most-travel-ever years, with airline mileage ending up “just” in the 60,000 mile range. This was partially because I didn’t make it to Asia, but it was still a somewhat saner schedule overall. It remains to be seen what this year will bring, but I’ll probably shoot for a similar level.

I already know I’ll be at Monkigras in London, ConfigMgmtCamp in Gent, CloudExpo in NYC, Interop in Vegas, and IEEE DevOps Unleashed in Santa Monica. I also typically attend a variety of Linux Foundation events, an O’Reilly event or two, Red Hat Summit (in SF this year), and VMworld (although I always say I won’t); I will probably do most of these this year as well. I may ramp things up a bit—especially for smaller gatherings—in my current areas of focus, specifically DevOps and IoT. This translates into DevOps Days and other events TBD.

If there’s some event that you think I should take a look at, or you’d like me to speak, drop me a line. Note that I’m not directly involved with sponsorships, especially for large events, so if you’re really contacting me to ask for money, please save us both some time.

Writing. I have various papers in flight at the moment and need to map out what’s needed over the next six months or so. I also begin the year with my usual good intentions about blogging, which I was reasonably good about last year. My publishing schedule on this blog was down a bit, but I’ve also been writing for opensource.com, redhatstackblog.redhat.com, and openshift.com—as well as a variety of online publications.

You’re reading this on my "personal" blog. It's mostly (75%+) devoted to topics that fall generally under the umbrella of "tech." I generally keep the blog going with short link-comments when I'm not pushing out anything longer. The opinions expressed on this blog are mine alone and the content, including Red Hat-related content, is solely under my control. I’m also cross-posting to Medium when I feel it’s justified.

My biggest ambition this year is to publish a new book. This has been the subject of on-again, off-again mulling for the last 12 to 18 months or so. I began with the intent to just revise Computing Next to bring in containers and otherwise adapt the content to the frenetic change that’s going on in the IT industry today. However, as time went on, this approach made less and less sense. Too much was different and too many aspects required reframing.

Therefore, while I will probably repurpose some existing content, I’m going to start fresh. The pace of change still makes writing a book challenging but, given where we are with containers, xaaS, DevOps, IoT, etc., I’m hopeful that I can put something together that has some shelf life. My current plan is to shoot for something in the 100-120 page range (i.e. longer than pamphlet-style but shorter than a traditional trade paperback) for completion by the summer. I’d really like to have it done by Red Hat Summit but we’ll see how likely this is. Working title is Phase Shift: Computing for the new Millennium and it will focus on how new infrastructure, application development trends, mobile, and IoT all fit together.

Podcasts. I grade myself about a B for podcasts last year. I think I had some good ones but wasn’t as aggressive about scheduling recording sessions as I could have been. I expect this year will end up similarly although I’m going to make an effort to bring in outside interview subjects on a variety of topics. I find 15 minute interviews are a good way to get interesting discussions out there without too much effort. (And I get them all transcribed for those who would prefer to read.)

Video. Video seems to be one thing that largely drops off my list. It takes a fair bit of work and I’ve struggled with how to use it in a way that adds value and looks at least reasonably professional. It probably doesn’t help that I’m personally not big into watching videos when there are other sources of information.

Social networks. I am active on twitter as @ghaff. As with this blog, I concentrate on tech topics but no guarantees that I won't get into other topics from time to time.

I mostly view LinkedIn as a sort of professional rolodex. If I've met you and you send me a LinkedIn invite, I'll probably accept though it might help to remind me who you are. I'm most likely to ignore you if the connection isn’t obvious, you send me a generic invite, and/or you appear to be just inviting everyone under the sun. I also post links to relevant blog posts when I remember.

I'm a pretty casual user of Facebook and I limit it to friend-friends. That's not to say that some of them aren't professional acquaintances as well. But if you just met me at a conference somewhere and want to friend me, please understand if I ignore you.

I use Google+ primarily as an additional channel to draw attention to blogs and other material that I have created. I also participate in various conversations there. As with twitter, technology topics predominate on my Google+ stream.

I use flickr extensively for personal photography.