Wednesday, May 25, 2016

Issue #4 of my newsletter is live

This issue has links to an article I recently had published on public cloud security as well as to discussions around using Ansible with docker-compose and why it's important to orchestrate containers using tools such as Kubernetes.

Links for 05-25-2016

Thursday, May 19, 2016

Data, security, and IoT at MIT Sloan CIO Symposium 2016

As always, the MIT Sloan CIO Symposium covered a lot of ground. Going back through my notes, I think it’s worth highlighting a couple sessions in particular—in addition to the IoT birds of a feather that I led at lunchtime. They all end up relating to each other through data, data security, and trust.

Big Data 2.0: Next-Gen Privacy, Security, and Analytics, moderated by Sandy Pentland of the MIT Media Lab

There were two major themes in this panel.

Sandy Pentland

The first was that it's not about the size of the data but the insights you get from it. This is perhaps an obvious point, but it's fair to say that there's probably been too much focus on how data gets stored and processed. These are important technical questions, to be sure. But they're technical details, not ends in themselves.

I might be more forgiving had I not lived through the prior data warehousing enthusiasm of the mid- to late-1990s. As I wrote five years ago: "There are many reasons that traditional data warehousing and business intelligence has been, in the main, a disappointment. However, I'd argue that one big reason is that most companies never figured out what sort of answers would lead to actionable, valuable business results. After all, while there is a kernel of truth to the oft-repeated data warehousing fable about diapers and beer sales, that data never led to any shelves being rearranged."

However, the other theme is newer, or at least amplified: ensuring the security of data and the privacy of those whose data is being stored. One idea Sandy Pentland discussed was sharing answers (especially aggregated answers) rather than raw data. See enigma.mit.edu as an example of a system that's designed to make it possible for parties to use and maintain data without having full access to that data. Pentland also noted that, because systems such as this make it possible to securely ask questions across jurisdictional boundaries, they could help address some of the often conflicting laws about the treatment of personally identifiable information.
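
The mechanics of answers-not-data are easy to sketch. Here's a toy illustration in Python (my own simplification, not Enigma's actual design, which relies on secure multi-party computation and is far more sophisticated): the raw records never leave the data holder, queries over too-small cohorts are refused, and the aggregates that do come back carry a bit of noise.

```python
import random

class AnswerOnlyStore:
    """Toy data holder that exposes aggregate answers, never raw records."""
    MIN_COHORT = 10  # refuse questions about too few people

    def __init__(self, records):
        self._records = records  # raw data stays private to this object

    def average(self, field, predicate=lambda r: True, noise=1.0):
        cohort = [r[field] for r in self._records if predicate(r)]
        if len(cohort) < self.MIN_COHORT:
            raise ValueError("cohort too small to answer safely")
        # Share the (noised) answer, not the underlying rows.
        return sum(cohort) / len(cohort) + random.gauss(0, noise)

store = AnswerOnlyStore([{"age": 20 + i, "zip": "02139"} for i in range(50)])
print(round(store.average("age", lambda r: r["zip"] == "02139"), 1))
```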

Getting Value from IoT

At my luncheon BoF table, we had folks with a diverse set of IoT experiences including Ester Pescio and Andrea Ridi of Rulex Analytics, Nirmal Parikh of Digital Wavefront, and Ron Pepin, a consultant and former Otis Elevator CIO. The conversation kept coming back to value from data. What data can you gather? What can you learn from it? And, critically, can you do anything with that data to create business value?

Per my earlier comment about data warehouses, gathering the data is relatively straightforward. It may not be easy, especially when you're dealing with sensors that aren't on your own property and therefore need dedicated networks of some sort. But the problems are mostly understood. It's "just" a case of engineering cost-effective solutions.

But what data and what questions? Ron Pepin shared his experiences from Otis. Maintenance is a big deal for elevators. It’s also the main revenue stream; the elevators themselves are often a loss leader. Yet proactive elevator maintenance mostly consists of preventative maintenance on a fixed schedule. 

Anders Brownworth, Principal Engineer, Circle, on blockchain panel

It seems like a problem tailor-made for IoT. Surely, one can measure some things and predict impending failures. But it's not obvious what combination of events (if any) is a reliable signal for needed maintenance. There's potential for more intelligent and efficient maintenance, but this isn't a case where you can cost-effectively just instrument everything (someone else owns the building), and the right measurements aren't obvious. Is it number of hours, number of elevator door reversals, temperature, load, particular patterns of use, something else, or none of the above?
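
Just to make the exploration concrete, here's a throwaway sketch of the kind of question you'd start with; the telemetry, the feature names, and the planted door-reversal signal are all invented:

```python
import random

random.seed(1)
FEATURES = ["run_hours", "door_reversals", "motor_temp", "avg_load"]

def reading(failed):
    # Hypothetical telemetry; door reversals are the planted "real" signal.
    return {"run_hours": random.gauss(3000, 500),
            "door_reversals": random.gauss(80 if failed else 40, 15),
            "motor_temp": random.gauss(60, 8),
            "avg_load": random.gauss(500, 100),
            "failed": failed}

data = [reading(random.random() < 0.1) for _ in range(1000)]

def separation(feature):
    # Crude score: how far apart are failed vs. healthy units on this feature?
    fail = [d[feature] for d in data if d["failed"]]
    ok = [d[feature] for d in data if not d["failed"]]
    if not fail or not ok:
        return 0.0
    spread = max(d[feature] for d in data) - min(d[feature] for d in data)
    return abs(sum(fail) / len(fail) - sum(ok) / len(ok)) / spread

for f in sorted(FEATURES, key=separation, reverse=True):
    print(f"{f:15s} {separation(f):.3f}")  # door_reversals should rank first
```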

The Blockchain

Given the level of hype around blockchain, perhaps the most interesting thing about this panel led by Christian Catalini of MIT Sloan was the lack of such hype.

Interest, yes. Catalini described how blockchain sits at an interesting intersection of computer science, economics and market design, and law. He also argued that it can not only make existing processes more efficient (which could potentially redefine the boundaries of firms by reducing transaction costs) but also create new types of platforms.

That said, there was considerable skepticism about how broadly applicable the technology is. Anders Brownworth of Circle (which has a peer-to-peer payment application that makes use of blockchain) said that the benefits of blockchain lie broadly in time-based transactions, interoperability, and the ability of many parties to audit those transactions. However, with respect to private blockchains outside of finance, "we trust all the people around the table anyway" and, therefore, the auditability that's inherent to blockchain doesn't buy you much.

In the same vein, Simon Peffers of Intel agreed that it's "hard to let thousands of users have the same view of data with a traditional database. But some blockchain use cases would fit with a traditional database." He added that "There is a space for smaller consortiums of organizations that know who the parties are, with other requirements that can be implemented in a private blockchain. Maybe you know who everyone is but don't fully trust them."

To sum up the panel: You’re usually going to be giving up some features relative to a more traditional database if you use blockchain. If you’re not making use of blockchain features such as providing visibility to potentially untrusted users, it may not be a good fit.
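
For a sense of what that inherent auditability amounts to mechanically, here's a toy hash chain in Python. It's nothing like a production blockchain (no consensus, no signatures, no distribution), but it shows the core property the panel kept coming back to: every record commits to its predecessor, so anyone holding a copy can detect after-the-fact tampering.

```python
import hashlib, json

def block_hash(b):
    body = {"prev": b["prev"], "payload": b["payload"]}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def block(prev_hash, payload):
    b = {"prev": prev_hash, "payload": payload}
    b["hash"] = block_hash(b)
    return b

def verify(chain):
    for i, b in enumerate(chain):
        if b["hash"] != block_hash(b):                   # record altered after the fact
            return False
        if i > 0 and b["prev"] != chain[i - 1]["hash"]:  # link to predecessor broken
            return False
    return True

chain = [block("genesis", {"from": "a", "to": "b", "amount": 10})]
chain.append(block(chain[-1]["hash"], {"from": "b", "to": "c", "amount": 4}))
print(verify(chain))                      # True
chain[0]["payload"]["amount"] = 10**6     # tamper with history
print(verify(chain))                      # False: any auditor can detect the change
```

If everyone around the table is already trusted, a permissioned database gives you most of this with less ceremony, which was exactly the panel's point.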

Photos (from top to bottom):

Sandy Pentland, MIT Media Lab

Anders Brownworth, Principal Engineer, Circle

Tuesday, May 10, 2016

Links for 05-10-2016

My newsletter experiment

There's a certain range of material (curated links to comment on, updates, and short fragments) that has never felt particularly comfortable to me as blog posts or on Twitter. Tumblr never quite did it for me, and I've little interest in shoving content into yet another walled garden anyway. I've been thinking about trying a newsletter for a while and, when Stephen O'Grady joined the newsletter brigade, I figured it was time to give it a run. We'll see how it goes.

Here’s a link to the first issue: https://www.getrevue.co/profile/ghaff/archive/19505

It includes some DevOps-related links and short commentary, links to a couple of new papers I've written on security and deploying to public clouds, and upcoming events including Red Hat Summit in San Francisco at the end of June. (Regcode INcrowd16 saves $500 on a full conference pass!)

You can also subscribe directly to this newsletter here.

The need for precise and accurate data


Death by GPS (Ars Technica):

What happened to the Chretiens is so common in some places that it has a name. The park rangers at Death Valley National Park in California call it “death by GPS.” It describes what happens when your GPS fails you, not by being wrong, exactly, but often by being too right. It does such a good job of computing the most direct route from Point A to Point B that it takes you down roads which barely exist, or were used at one time and abandoned, or are not suitable for your car, or which require all kinds of local knowledge that would make you aware that making that turn is bad news.

It's a longish piece that's worth a read. However, it seems that a lot of these GPS horror stories--many from the US West--are as much about visitor expectations of what constitutes a "road" as anything else. It's both about the quality of the underlying data and its interpretation, things that apply to many automated systems. 

According to Hacker News commentator Doctor_Fegg:

This is clearly traceable to TIGER, the US Census data that most map providers use as the bedrock of their map data in the rural US, yet was never meant for automotive navigation.

TIGER classes pretty much any rural "road" uniformly - class A41, if you're interested. That might be a paved two-lane road, it might be a forest track. Just as often, it's a drainage ditch or a non-existent path or other such nonsense. It's wholly unreliable.

But lest you think data problems are in any way unique to electronic GPS systems, read this lengthy investigation into a 1990s Death Valley tragedy.

For what it's worth, I took a cursory look at what Google Maps would do if I tried to entice it into taking me on a "shortcut" through the Panamint Mountains in western Death Valley. My conclusion was that it seemed robust about not taking the bait; it kept me on relatively major roads. However, if I gave it a final destination that required taking sketchy roads to get there (e.g. driving to Skidoo), it would go ahead and map the route.

After writing this, it occurs to me that for situations such as this, we need data that is both accurate (represents the current physical reality) and precise (describes that physical reality with sufficient precision to be able to make appropriate decisions).
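
A toy routing sketch makes both properties concrete; the map, the surface labels, and the penalty factor are all invented. A shortest-path algorithm happily takes an "abandoned track" shortcut when every road is classed alike, and avoids it once the data describes the surface precisely enough to penalize it:

```python
import heapq

def shortest_path(graph, start, goal, cost):
    # Plain Dijkstra; "graph" maps node -> list of (neighbor, edge) tuples.
    queue, seen = [(0, start, [start])], set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, edge in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (dist + cost(edge), nxt, path + [nxt]))
    return float("inf"), []

# Toy map: the "shortcut" is shorter but is really an abandoned track.
graph = {
    "A": [("B", {"miles": 30, "surface": "paved"}),
          ("X", {"miles": 8,  "surface": "abandoned_track"})],
    "X": [("B", {"miles": 8,  "surface": "abandoned_track"})],
}

naive = lambda e: e["miles"]  # every rural "road" classed alike, TIGER-style
wary  = lambda e: e["miles"] * (1 if e["surface"] == "paved" else 50)

print(shortest_path(graph, "A", "B", naive))  # (16, ['A', 'X', 'B']): death by GPS
print(shortest_path(graph, "A", "B", wary))   # (30, ['A', 'B']): stays on pavement
```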

Monday, May 09, 2016

Interop 2016: The New Distributed Application Infrastructure


The platform for developing and running modern workloads has changed. This new platform brings together the open source innovation being driven in containers and container packaging, in distributed resource management and orchestration, and in DevOps toolchains and processes to deploy infrastructure and management optimized for the new class of distributed application that is becoming the norm.

In this session, Red Hat's Gordon Haff discusses the key trends coming together to change IT infrastructure and the applications that will run on it. These include:

  • Container-based platforms designed for modern application development and deployment 
  • The ability to design microservices-based applications using modular and reusable parts 
  • The orchestration of distributed components 
  • Data integration with mobile and Internet-of-Things services 
  • Iterative development, testing, and deployment using Platform-as-a-Service and integrated continuous delivery systems

Tuesday, April 26, 2016

DevOpsDays London 2016


April London was cool. But DevOpsDays London was hot and happening, selling out its venue in the shadow of St. Paul’s Cathedral. In many respects, it was a fairly typical DevOpsDays event with a focus on organization, process, and culture over individual products and toolchains. 

In other respects, it reflected the evolution of DevOps from something most associated with Silicon Valley “unicorns” to a core set of principles, processes, and practices that are broadly applicable. Also reflecting a location not far from the City of London, Barclays was a major sponsor and both financial services firms and major system integrators were well-represented in the audience and in the booths. 

With that as preamble, here are some of the discussions and other topics that caught my eye in one way or another during the course of the two-day event.

Metrics matter

As Splunk's Andi Mann observed in an open spaces discussion, it's nice to measure the things that you do, but it's even better to measure what you actually accomplish. And better still is to measure accomplishments that closely map to business outcomes rather than IT outputs.

One participant noted that "We had all these metrics. 1100 of them. We ran a report every month. But why do these metrics matter? Will it help someone make a decision on a daily basis?" Another wryly observed that "shipping crap quicker isn't a metric anyone should want to measure."

This led to further discussion about the distinction between metrics, alerts, and logs—something that was also touched on in some of the presentations. Google’s Jeromy Carriere pointed out that, in contrast to logs that enable root cause investigation, "alerts need to be exciting. If they're boring, automate them."
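
That rule of thumb translates almost directly into code. A minimal sketch (the conditions and remediations are hypothetical): alerts with a known, mechanical fix run a runbook automatically and just get logged, while anything unrecognized pages a human.

```python
# Hypothetical dispatcher illustrating "if an alert is boring, automate it."
RUNBOOK = {
    "disk_nearly_full": lambda alert: f"rotated logs on {alert['host']}",
    "service_hung":     lambda alert: f"restarted service on {alert['host']}",
}

def dispatch(alert):
    handler = RUNBOOK.get(alert["condition"])
    if handler:
        return f"AUTOMATED: {handler(alert)}"   # boring: no page, just a log entry
    return f"PAGE ON-CALL: {alert}"             # exciting: a human decides

print(dispatch({"condition": "disk_nearly_full", "host": "web-3"}))
print(dispatch({"condition": "error_rate_spike", "host": "web-3"}))
```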

Enterprise DevOps

As I wrote above, there was a significant enterprise, even conservative enterprise, angle to the event. For example, Claire Agutter talked about how to “Agile your ITIL.” (I suspect there are Silicon Valley companies lacking a developer who even knows how to spell ITIL.) 

Claire observed that "the reason companies look away from ITIL is it looks bureaucratic" even though "it's how IT gets done in many organizations." She pointed out that the issue is that ITIL has been implemented as a slow-moving waterfall process in many organizations. However, it doesn't need to be and, in fact, the best way to think about ITIL process is simply that it's a consistent way of doing things. And what's a great match for a consistent way of doing things? That would be automation (using a tool such as Ansible).

Bimodal IT?

Arguments about definitions and appropriate models often seem a bit "how many angels can dance on the head of a pin"-ish to me. I mostly felt that way when I was an analyst (and analysts generally love creating definitions and models) and I certainly feel that way now. That said, it seems to have become sufficiently trendy to bash Gartner's bimodal IT model (see e.g. Kris Saxton's "Bimodal IT and other snake oil" from this event) that I feel compelled to respond.

I've already written most of what I think is worth saying and won't repeat it here. But, really, Kris largely made my general point in his talk when he said: "A lot of people take away the headlines. The details are largely sane but [bimodal is] most problematic as a vision statement communicated from the C level." I guess I have trouble seeing the problem with a largely descriptive model for enterprise IT that will inevitably be upgraded and replaced in pieces and at different rates. And CIOs who don't bother to read beyond the headlines and latch onto this (or any other model) to justify simply maintaining the status quo? Well, that organization has bigger problems than a Gartner model that's possibly insufficiently nuanced or visionary.

DevOpsSec

I led an open spaces discussion on best practices for security in a DevOps world especially when there are compliance and regulatory issues to consider. We actually ended up having two back-to-back security discussions; the one prior to mine focused on what “tolerate failure” means in a security/risk context. In practice, the discussions flowed into each other. In any case, the only issue was that so many people wanted to participate that it was a bit hard for everyone to pack themselves in!

The shared experiences around security were generally consistent with what I’ve heard in other discussions of this type. For example, there was a lot of interest in automated vulnerability scanning using tools such as OpenSCAP. Also mentioned was using human and machine-readable formats such as Ansible Playbooks to document processes and ensure that they’re followed consistently. (Alas, also consistent with other discussions was the familiar refrain that a lot of auditors are still not prepared to move beyond whatever paper-based checklists they’re already familiar with.)
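
As a sketch of what hooking such scanning into a pipeline can look like, here are a few lines of Python wrapping the oscap command line. It assumes OpenSCAP and SCAP Security Guide content are installed; the content path and profile ID are illustrative, so substitute whatever your distribution ships.

```python
import subprocess, sys

# Illustrative paths/profile; use whatever SCAP content your distro provides.
CONTENT = "/usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml"
PROFILE = "xccdf_org.ssgproject.content_profile_standard"

def scan():
    # oscap exits non-zero when rules fail (or on error), which makes it
    # easy to gate a build or deploy step on the scan result.
    result = subprocess.run(
        ["oscap", "xccdf", "eval",
         "--profile", PROFILE,
         "--results", "results.xml",
         "--report", "report.html",
         CONTENT],
        capture_output=True, text=True)
    return result.returncode

if __name__ == "__main__":
    sys.exit(scan())  # fail the pipeline stage on failed rules
```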

My "the times they are a-changin'" moment came, though, when someone piped up that he was one of those security guys who are often described as roadblocks to rapidly releasing software. He went on to add that this was the first conference he had ever attended that was not an explicit security conference and that he was going to go back to his company and recommend that the security team attend more events of this type. This really highlighted just how siloed security processes can be while providing a hopeful illustration that DevOps really is starting to create new opportunities for collaboration and communication.

This last point is crucial. I know folks who get a bit grumpy about the degree to which DevOpsDays majors on culture rather than the cool tool du jour. Tech is certainly important to DevOps, both as a platform and a toolchain. However, so many of us operate in an environment where it's so natural to fixate on the latest shininess that it's useful to be regularly reminded that culture and more open organizations are even more fundamental components of digital transformation.

Monday, April 11, 2016

Connected Things 2016 recap


The Internet of Things (IoT) and DevOps seem to be in a race to win the "most conferences and events" title. The IoT corner notched a pair last week with the Linux Foundation's new OpenIoT Summit in San Diego and Connected Things 2016, put on by the MIT Enterprise Forum at the Media Lab in Cambridge.

I haven’t looked at the contents from the OpenIoT Summit but I do have thoughts from Connected Things that mostly reinforced everything else I see going on in the space.

Everyone’s talking.

This event of 500 or so people sold out. This is clearly a hot topic and there's a sense that it must be important. As we'll see, the whats, the hows, the whys, and the wherefores are a lot fuzzier. I've been through plenty of these new-technology froths and I'm not sure I've ever seen quite such a mismatch between the hype and today's more modest reality. No, hype's not even quite right. It's almost more of a utopian optimism about potential. Cue keynoter David Rose, the author of Enchanted Objects: Design, Human Desire, and the Internet of Things. This is about cityscapes and intelligent spaces and the automation of the physical world.

But what is it?

At a high level, I think the definition or definitions are pretty straightforward. There’s an element of interfacing the physical world to the digital one. And there’s a big role for data—probably coupled with machine learning, real-time control, and machine-to-machine (M2M) communications. 

But how should we think about the market and where’s the value? Things get a lot murkier. 

(As I was writing this, an email literally popped into my account that read in part: "That brand new car that comes preloaded with a bunch of apps? Internet of Things. Those smart home devices that let you control the thermostat and play music with a few words? Internet of Things. That fitness tracker on your wrist that lets you tell your friends and family how your exercise is going? You get the point.” My point is that we have to refine our thinking to have useful discussions.)

At Connected Things, IDC’s Vernon Turner admitted that "It is a bit of a wrestling brawl to get a definition.” (For those who don’t know IDC, they’re an analyst firm that is in the business of defining and sizing markets so the fact that IDC is still trying to come to grips with various aspects of defining IoT is telling.) 

In general, the event organizers did make a gallant attempt to keep the sessions focused on specific problem classes and practical use cases but you were still left with the distinct feeling that the topic was coiled and ready to start zinging all over the place.

Data data everywhere. What do we do with it?

Data is central to IoT. Returning to IDC's Vernon Turner, he said that "By 2020, 44 zettabytes of content will be created (though not necessarily stored). We've never seen anything that scales at this magnitude before." He also said that there will be a need for an "IoT gateway operating system where you aggregate the sensors in some meaningful way before you get the outcome." (I'd add at this point that Red Hat, like others, agrees that this sort of three-tier architecture--edge, gateway, and cloud/datacenter--is going to generally be a good architecture for IoT.)
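
To make the gateway tier concrete, here's a toy sketch of the aggregation idea; the field names and window size are invented, and a real gateway would speak something like MQTT and do much smarter filtering. The point is simply that raw readings get buffered at the edge and only windowed summaries head upstream:

```python
from statistics import mean

class Gateway:
    """Toy edge gateway: buffer raw readings, ship only summaries upstream."""
    def __init__(self, window=60, upstream=print):
        self.window = window      # readings per summary
        self.upstream = upstream  # stand-in for the cloud/datacenter tier
        self.buffer = {}

    def ingest(self, sensor_id, value):
        buf = self.buffer.setdefault(sensor_id, [])
        buf.append(value)
        if len(buf) >= self.window:
            # One small summary goes upstream instead of `window` raw points.
            self.upstream({"sensor": sensor_id, "n": len(buf), "min": min(buf),
                           "mean": round(mean(buf), 2), "max": max(buf)})
            buf.clear()

gw = Gateway(window=5)
for v in [20.1, 20.3, 19.8, 20.0, 20.2]:
    gw.ingest("temp-7", v)  # the fifth reading triggers one aggregated message
```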

What's less clear is how effectively we'll make use of it given that we don't use data very effectively today. McKinsey's Michael Chui, on the same panel, noted that "less than 1% of the data collected is used for business purposes—but I expect an expansion of value over time in analytics." I do expect more effective use of data over time. It's probably encouraging that retail is leading manufacturing in IoT according to Vernon—given that retail was not a particular success story during the c. 1990s "data warehouse" version of better selling through analytics.

Security matters—but how?

I’m tempted to just cut and paste the observations about security I made at the MassTLC IoT conference last year because, really, I’m not sure much has changed.

MIT’s Sanjay Sarma was downright pessimistic: “We have a disaster on our hands. We'll see a couple power plants go down. Security cannot be an afterthought. I'm terrified of this."

No one seemed to have great answers—at least at the edge device level. The footprints are small. Updates may not happen. (Though I had an interesting discussion with someone—forget who—at Linux Collaboration Summit last week who argued that they're network devices; why shouldn't they be updated?) Security may need to be instantiated in the platform itself, using the silicon as the secret (John Walsh, President, Sypris Electronics). There was also some resignation that maybe walled gardens will have to be the answer. But what then about privacy? What then about portability?

There’s a utopian side to IoT. But there’s a dystopian side too.

Sunday, April 10, 2016

Building a garage hoist for my canoe

[Photo 1]

A couple of weeks ago, I finally got around to putting together a system that could 1.) Get my canoe into my garage in the winter when there are two vehicles there, 2.) Allow one person to lift it into position, and 3.) Fit it around existing structures, hardware, and other stored items. I’d been storing it on a rack outside but, especially with Royalex no longer being made, I wanted to treat it with a little more care.

The trickiest part, as you can see from the first photo, was that there's a relatively small three-dimensional volume to fit the 17-foot canoe into. It had to go front to back, clear the garage door opener and garage door, ideally not force me to move the sea kayak, and leave room for my small car to slide in underneath. It did all work, but barely, and it meant that I needed to cinch it up fairly precisely.

To start with, I just installed a couple of pulleys to lift the boat, but a Tripper with outfitting weighs over 80 pounds and it was just too heavy to readily lift up and then cinch into precise position. 

Now you can deal with the weight problem by adding additional pulleys so that you’re pulling more rope with less force. However, it can be hard to get the canoe to pull up evenly and I could never get this working in a way that positioned the boat as precisely as I needed it to be.
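
(The underlying arithmetic is just ideal mechanical advantage: with n rope segments supporting the load, you pull with roughly 1/n of the weight but haul n times the rope. A quick sketch, with a made-up derating for friction, which is very real with cheap pulleys:)

```python
def hoist(weight_lb, supporting_lines, lift_ft, friction_loss=0.1):
    # Ideal pulley math, derated by a rough per-setup friction factor.
    pull = weight_lb / supporting_lines / (1 - friction_loss)
    rope = lift_ft * supporting_lines
    return round(pull, 1), rope

for n in (1, 2, 3):
    pull, rope = hoist(80, n, 6)
    print(f"{n} line(s): pull ~{pull} lb, haul {rope} ft of rope")
```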

[Photo 2]

I next considered an electric winch. I went so far as to buy one and I think it would have worked but I was having trouble finding an ideal place to mount it and it seemed like overkill.

The solution I ended up with was a manual 600-pound winch that cost under $20 from Amazon. As you can see, two lines go up to a pair of pulleys. (I have overhead beams for storage on this side of the garage so I ended up just tying the pulleys to the existing beams.) One of the lines then heads down over the swivel pulley and is clipped into one cradle holding one end of the canoe. The other line goes through its pulley, which changes its direction 90 degrees to run to the other end of the canoe, where a final pulley drops it down to be clipped into the other cradle.

Don't read too much into the exact pulleys I used. I had a couple lying around and I bought another couple of "clothesline" pulleys at Home Depot. I could probably have mounted the canoe right side up with just a sling but I think I was able to get it up a little higher this way. (It's a tight fit; I guess if I ever get a bigger car, I'll have to revisit this. The canoe gets transported on an SUV.)

[Photo 3]

I’ll probably add a couple more clips to the system just to make it a little easier to position the cradles. And, before next winter, I’ll put a backup safety sling of some sort in place. But, overall this system seems to work very well. It takes very little effort to hoist the canoe into place and, once all the rope lengths and slings are properly adjusted, it’s very repeatable and straightforward. The canoe hangs down a bit lower than is ideal but that’s pretty much dictated by the garage door layout.

Wednesday, April 06, 2016

Specialists may have their day

Irving Wladawsky-Berger, who among other accomplishments led IBM's early Linux efforts, has a great post up regarding The Economist's special report on Moore's Law. Among the highlights:

Tissues, organs and organ systems have evolved in living organisms to organize cells so that together they can better carry out a variety of common biological functions. In mammals, organ systems include the cardiovascular, digestive, nervous, respiratory and reproductive systems, each of which is composed of multiple organs.

General purpose computers have long included separate architectures for their input/output functions. Supercomputers have long relied on vector architectures to significantly accelerate the performance of numerically-intensive calculations. Graphic processing units (GPUs) are used today in a number of high performance PCs, servers, and game consoles. Most smartphones include a number of specialized chips for dealing with multimedia content, user interactions, security and other functions. Neural network architectures are increasingly found in advanced AI systems.

As in evolution, innovations in special-purpose chips and architectures will be increasingly important as Moore’s Law fades away.

I agree with Irving. When I was an analyst, I saw specialized architectures largely fail because "why bother when Moore's Law would get you there in a year or two anyway?" I'm not sure the implications of losing the CMOS scaling lever are as widely appreciated as they should be. (The former head of microelectronics at DARPA pegged the gains at about 3,500X over the past couple of decades; you don't lose a lever like that and just go on with business as usual.)
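
(That figure passes a quick sanity check against the classic Moore's Law cadence, if you call a couple of decades roughly 24 years:)

```python
import math

gain = 3500
years = 24  # "the past couple of decades," roughly
doubling_time = years / math.log2(gain)
print(f"{gain}x over {years} years = doubling every {doubling_time:.1f} years")
# ~2.0 years per doubling: right on the classic Moore's Law cadence
```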

This will have a lot of implications for software certainly. I also wonder about the broader implications of smaller, lighter, cheaper, faster increasingly no longer being a given.

I wrote about this in more detail after the SDI Summit in Santa Clara at the end of last year.

Wednesday, March 30, 2016

Links for 03-30-2016

Monday, March 28, 2016

The "laptops" in my bag

I took one of my Chromebooks to an event last week and a couple of people asked me about it. So I thought I’d take the opportunity to talk about my larger devices as an extension to my recent “What’s in my bag?” post. 

In general, I carry a laptop-like device and a tablet-like device. Laptop-like devices are really a lot better for typing and tablet-like devices are better for reading or watching video on a plane. And I haven't found anything that really switches between the two modes well. I'm happy to stipulate that the Microsoft Surface Pro 4 may be that device for some people, but I'm really not in the Microsoft ecosystem any longer, so that doesn't work for me at this point.

More on tablet-like devices in a later post but, usually, my laptop-like device for travel is my 2015 13" MacBook Pro. Because it's a full laptop, it's the most versatile thing to take with me, especially if I might not always be connected. It weighs about 3.5 pounds and is 0.71 inches thick. For me, this is about the perfect compromise between working at a desk and traveling. The smaller MacBook models are just a bit too small or trade off things like a second USB port, which keeps me from wanting to use them day in and day out. I do value compactness and light weight when I'm traveling but I find that, by the time you add chargers and dongles and various adapters, another pound or so of laptop just isn't a big deal. This is still a very svelte laptop by any historical standard.

How about Chromebooks?

First, let me share my thoughts on Chromebooks in general and then I’ll get to a couple specific models. 

Chromebooks are pretty awesome for the right use. At around $250 for many models, they're a great match for web-based tasks (browsing and online office suites or even many software development tasks). You even get a hidden Crosh shell that gives you utilities like ssh. You're not totally dead in the water if you go offline—for example, Evernote switches back and forth between connected and non-connected modes pretty smoothly—but they're definitely oriented toward situations where you have reliable WiFi. (On the one hand, this is increasingly common, tech conferences notwithstanding. On the other hand, it's also hard to do many things disconnected even if you have a full laptop.)

For $250, you’re not going to get high-resolution screens, backlit keyboards, or things like that. But my 13” Dell Chromebook from 2014 sits on a table downstairs in my house where I often find it more convenient for doing a lot of searching than using a tablet. (Yes, I could go find my laptop but I find it being “just there” handy.)

A variety of higher-priced Chromebooks out there have more features. Personally, I get a lot less interested in a Chromebook as it approaches a ~$500 price point and beyond, given that it won't replace a laptop for most people.

More interesting from a travel perspective is a device like the Asus Chromebook Flip. It's a 10.1" laptop that weighs about 2 pounds. The touch-sensitive screen also flips into a tablet mode. In my experience, it also has pretty reliable "all day" battery life, which is probably a couple of hours longer than my MacBook's. If I don't need more than a Chromebook and want to go lightweight, this is what I carry.

A few caveats:

Unlike my 4GB Dell, I have the 2GB memory Asus model—mostly because that’s all they had at the Best Buy when I needed something during a trip when I accidentally left my MacBook at home. It does stutter every now and then if there are multiple tabs open, so go with 4GB. 

The keyboard is fine, but it is small. I have no issue with using this as a travel laptop but I wouldn’t want to type with a keyboard this size all the time.

The tablet mode is “OK.” By that I mean it feels a bit weird having the keypad under your fingers when you’re holding the folded laptop though it can be used as an ebook reader in a pinch. I also don’t normally get movies and TV from Google Play so I don’t have a simple source for video content. This isn’t a problem with the device so much as the fact that Google is yet another separate ecosystem for content that you may or may not already be using.

So. For most work trips today, my MacBook Pro still usually wins. It’s just more versatile and I have access to a lot of presentations and other materials even if I’m not online. But, it’s not hard for me to imagine smaller (perhaps convertible in some way) devices becoming a more practical travel option over time.

Sunday, March 20, 2016

What's in my bag?

[Photo]

These pieces about travel gear seem to be popular and I travel a lot, so here you go. Nothing on clothes or footwear here but I cover most everything else.

I've previously written about my carry-ons. Depending upon the trip, I typically bring either an Osprey travel backpack or a Patagonia over-the-shoulder/backpack-in-a-pinch bag. It really depends on how much schlepping I'll be doing. I have a variety of other bags for when I'm checking luggage or am not traveling by air, but those two cover at least 80 percent of my air trips.

I usually carry a Timbuk2 messenger bag as my "personal piece," as the airlines like to refer to it. This is also my day-to-day "briefcase." Comfortable to carry, nice internal compartments, rugged as heck. You can also stuff a lot into it in a pinch. The main downside is that the material is heavy-duty, so it doesn't stuff down as much as I'd like when I consolidate it into another bag. Nor does it make a particularly good "man purse" for walking around town; it's too big. So I have a couple of other fanny packs or over-the-shoulder bags I carry when I don't need something bigger (as I often don't with laptops more petite than they used to be).

I tend to switch various bags around from trip to trip, so one thing I've found important is to compartmentalize contents. I use two primary bags for this.

An Eagle Creek Pack-It Specter is made out of a thin, light, high-tech material. (Technically I guess it's for toiletries.) For most trips I find this is perfect for holding:

  • Spare pair(s) of reading glasses in case
  • Bottle opener (plus USB key)--thanks OpenShift by Red Hat
  • Corkscrew
  • Small first aid/medical kit
  • Prescription medications
  • Spare contact stuff
  • Travel-specific electronic adapters: e.g. ethernet dongle and ethernet cable, international plugs, car power adapter
  • Small plug-in microphone for iPhone
  • An envelope or two
  • Small plastic bags
  • Earplugs
  • Very small notebook
  • Chapstick
  • Wetnaps

For a longer trip or one that needs more of this miscellaneous gear than average, I have a second one of these bags that I can use, or I consolidate this bag and bulkier discrete items into an Eagle Creek half-cube or something along those lines.

My day-to-day mesh ditty bag that holds all my electronic cables, etc.:

  • USB plug
  • USB auto "cigarette lighter" adapter
  • USB to XYZ (Lightning, micro-USB, etc.) adapter cables
  • Hands-free adapter for telephone
  • Good ear canal headphones (I use Klipsch E6 but I'll probably splurge on Bose noise-canceling ones one of these days)
  • External battery to charge phone. I have a Tumi that I was given. It's bigger than my other ones but it does hold a lot of juice so that's what I carry.
  • Business cards/case
  • Remote laptop control for presentations (I use a Logitech model)
  • Any necessary dongles for laptops. (I assume VGA output only unless I've been told otherwise. I do have an HDMI dongle for my primary laptop and a retractable HDMI cable to use with hotel TVs but I don't routinely bring those.)
  • Plastic "spork"
  • Small LED headlamp
  • Pens

The retractable cables are nice although, if you look at the photo, you'll see it's a bit of a hodgepodge given that some of this is stuff I've picked up at tradeshows, etc. Make sure that higher-current devices like tablets will actually charge using the parts you bring.

I've tried out Chromecast and travel routers for hotel rooms but I've given that up as being too associated with the pain that happens whenever you fiddle with networking gear.

Prescription glasses in a tough case

iPhone 6

Given said iPhone 6, I don't regularly carry my Canon S100 any longer even though it shoots raw pics and has an optical zoom. I do have both Fujifilm and Canon systems as well and I'll bring one or the other--usually the Fujifilm X-E1 (along with associated batteries and chargers)--if I'm doing more serious photography.

The requisite quart Ziplock for toiletries of course.

For a long time, I looked for a travel portfolio to carry my passport, spare cash, backup credit card, and various other info/cards that I like to have with me on most trips. I had a couple of tall portfolios that were too big; you don't really need a portfolio that carries a sheaf of airline tickets these days. I had a nice leather one I was given that was about the right size but it didn't zip up; I stopped using it after I lost some of its contents when they fell out one trip. I finally found one by Eagle Creek that is just right for me. (They don't seem to make the one I have any longer; this looks like the current equivalent.)

Typically my MacBook Pro (but sometimes an Asus flip-top Chromebook) plus (usually) a tablet device of some sort whether a Kindle Paperwhite or an iPad 3.

(Mostly for longer trips) a thin 8 1/2 x 11" plastic portfolio to carry tickets, printed-out information, maps, etc. Yeah, a lot of this could be (and is) on my phone but I find carrying some paper backup to often be useful.

I usually just carry my regular wallet (leather, not a lot in it, put in a bag or a side pocket) though I do have various zippered wallets that hang around the neck or otherwise aren't in a pocket that I'll sometimes take for non-city trips.

Nylon (or whatever) reusable grocery bag. Weighs nothing, and more and more places are starting to charge for bags. It can be handy for organizing stuff as well.

I have a small lightweight mesh laundry bag I often bring but almost any sort of bag will do.

Sometimes I pack either a foldable duffle or a foldable day pack for extra space on the return leg.

I'll close by noting that I don't typically bring everything listed here on a single trip and certainly not on the all-too-typical out-and-back to a hotel and office trip. That said, I do try to keep the "standard gear" relatively compartmentalized and ready to grab and go, even if I could trim it back a bit for a given trip. Other items that aren't part of my day-to-day routine I mostly keep in a box which I can go through if I'm going to take a longer/international/more varied trip.

Friday, March 18, 2016

DevOps Lessons presentation at IEEE DevOps events

Earlier this week I spoke at the IEEE DevOps for Hybrid Cloud event in "Silicon Beach," CA (aka El Segundo) at the Automobile Driving Museum. (Did I mention this is outside LA?) I've given variants of this talk before but I'm continually refining it. It seems to go over well although I'm always worried that I try to cover too much ground with it. In any case, we had a great audience. It was probably one of the most interactive and engaged crowds I've had in a while.

Here's the abstract:

Manufacturing has widely adopted standardized and automated processes to create designs, build them, and maintain them through their life cycle. However, many modern manufacturing systems go beyond mechanized workflows to introduce empowered workers, flexible collaboration, and rapid iteration.

Such behaviors also characterize open source software development and are at the heart of DevOps culture, processes, and tooling. In this session, Red Hat’s Gordon Haff will discuss the lessons and processes that DevOps can apply from manufacturing using:

  • Container-based platforms designed for modern application development and deployment.
  • The ability to design microservices-based applications using modular and reusable parts.
  • Iterative development, testing, and deployment using platform-as-a-service and integrated continuous delivery systems.

Monday, March 14, 2016

It's about team size: Not monolith vs. microservice

16080414598 35f2b36964 k

Basecamp’s David Heinemeier Hansson has written probably the most readable and balanced dissection of the monolith vs. microservices debate that I’ve run across. Go ahead and read it. A couple choice quotes:

Where things go astray [with microservices] is when people look at, say, Amazon or Google or whoever else might be commanding a fleet of services, and think, hey it works for The Most Successful, I’m sure it’ll work for me too. Bzzzzzzzzt!! Wrong!

DHH goes on to write that

Every time you extract a collaboration between objects to a collaboration between systems, you’re accepting a world of hurt with a myriad of liabilities and failure states. What to do when services are down, how to migrate in concert, and all the pain of running many services in the first place.

...all that pain is worth it when you have no choice. But most people do have a choice, and they do have an alternative. So allow me to present just one such choice: The Majestic Monolith!

The Majestic Monolith that DHH describes is essentially a well-architected (mostly) monolithic application that is well-understood by the individuals working on it. 

A point worth highlighting here: a team of about 12 programmers works on the Basecamp application described in this post. That's not all that much bigger than Amazon's "two-pizza" team size which, in turn, is often equated with the small, bounded-context, single-function teams that develop individual microservices.

And that's a key takeaway, I think. I'm not sure this is, or should be, a debate about monoliths vs. microservices. Rather, in many cases, it's a discussion about team coordination. Prematurely optimize into patterns based on tiny discrete services and you silo knowledge and create architectural complexity. Let individual applications grow too large, especially in the absence of a common vision, and you get brittle and inflexible apps.

Either make components notionally independent of each other (microservices) or you’d better plan on efficiently coordinating changes. 
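
DHH's "world of hurt" is easy to see in code. Below, the same pricing collaboration twice: once as an in-process call, once extracted to a service, at which point timeouts, retries, backoff, and a fallback strategy all become your problem. (A deliberately simplified sketch; the endpoint and payload shape are hypothetical.)

```python
import json, time
import urllib.request

def price_in_process(order):
    # Inside the monolith this is just a method call: no new failure states.
    return sum(item["qty"] * item["unit_price"] for item in order["items"])

def price_via_service(order, retries=3):
    # The same collaboration across a network boundary (hypothetical endpoint).
    for attempt in range(retries):
        try:
            req = urllib.request.Request(
                "http://pricing.internal/price",
                data=json.dumps(order).encode(),
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req, timeout=0.5) as resp:
                return float(resp.read().decode())
        except OSError:                      # refused, timed out, DNS failure...
            time.sleep(0.1 * 2 ** attempt)   # back off and retry
    raise RuntimeError("pricing service down: fail the order? queue it? guess?")
```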

Links for 03-14-2016

Friday, March 04, 2016

At least one of our long national nightmares is over

[Image: SCO is finally dead. Dead-parrot dead.]

An interesting piece of news crossed my desk (well, actually appeared in my browser) this week: The (presumably) final resolution of the entire SCO saga. If you missed it, that’s not entirely surprising. The long, sordid saga was effectively put to bed a long time ago when SCO lost some key court decisions and went bankrupt. However, there remained a complicated set of claims and counterclaims that were theoretically just dormant and could have been reanimated given a sufficiently bizarre set of circumstances. 

However, on February 26:

Plaintiff/Counterclaim-Defendant, The SCO Group, Inc. ("SCO"), and Defendant/Counterclaim-Plaintiff, International Business Machines Corporation ("IBM"), jointly move for certification of the entry of final judgment on the Court's orders concerning all of SCO's claims, including the (a) Order filed on Feb. 5, 2016, granting IBM's Motion for Partial Summary Judgment (Docket No. 782), (b) Order filed on Feb. 8, 2016, granting IBM's Motion for Partial Summary Judgment (Docket No. 783), (c) Partial Judgment Dismissing SCO Claims filed on July 10, 2013, and (d) Order filed on July 1, 2005, denying SCO's Motion for Leave to File a Third Amended Complaint (Docket No. 466).

There’s more legalese but this would seem to be as much of a wrap as there ever is in the legal world.

I started covering this drama back in 2003 when SCO and their lawyers did their roadshow to industry analysts to show off the code that had been purportedly copied into Linux. (I was working at Illuminata at the time.) We wouldn’t sign their NDA but they showed us some code anyway and I ended up writing a research note “SCO’s Derived Case Against Linux.” I’m sure it got some of the details wrong but this was before it was particularly clear what was even being claimed. (Of course, that would remain a pattern throughout much of the case.)

I then ended up helping my colleague Jonathan Eunice write an expert witness report for IBM once those cases got rolling. I haven’t been able to discuss that fact or anything else about the case while the claims and counterclaims remained open. It was a busy number of months working on that report. In all, it was a fascinating experience although one I’m not sure I would want to make a practice of. It also gave me an appreciation for why lawsuits like these are so incredibly expensive. 

Unfortunately, the expert witness reports remain under court seal and that’s unlikely to change. That’s a bit frustrating both because I think we did some good work that ended up not really being used and because there’s a lot of historical information about the claims SCO made that will probably never see the light of day. But, in any case, I still can’t say too much about the details that I know.

The whole set of cases was such a weird trip down the rabbit hole. The confusion over who owned the UNIX copyrights is probably Exhibit A. Wouldn't you have thought the executives involved with the supposed sale would have remembered, and that the contract would have been crystal clear on this basic point? One would, but this is the SCO saga we're talking about.

It’s hard to argue that the SCO cases hurt open source and Linux. Perhaps they slowed down adoption in some circles. But the fact that Linux made it through what, at one time, looked to be a serious threat perhaps even strengthened it in the long run. 

Tuesday, March 01, 2016

2016 MIT Sloan CIO Symposium


I just received a notice for this year’s MIT Sloan CIO Symposium that’s happening at MIT on May 18. I’ve covered it as press for a number of years; here’s my story from last year. It always has good speakers (with a nice mix of business and academic)—as well as panels that are better than the norm at conferences. 

This year’s theme is “Thriving in the Digital Economy” with topics including:

  • Impact digital has on the nature of work, the workplace, and innovation
  • Big Data 2.0 [1] and Data’s Strategic Role
  • Platform Strategies, IoT, Cybersecurity, and Blockchain

I’m particularly interested in the blockchain session which Julio Faura of Santander is giving.

A call for applications is also now open for the Innovation Showcase, which will feature 10 early-stage companies providing "innovative and valuable IT solutions" at the event. The deadline for submissions is March 26.

[1] I’m not so sure we ever really achieved Big Data 1.0, but I digress.

Links for 03-01-2016

Friday, February 26, 2016

Thursday, February 11, 2016

Links for 02-11-2016

Friday, January 22, 2016

Book Review: Cooking for Geeks, Second Edition

As a single book, this combination of food science, recipes, equipment, ingredients, experimentation, interviews, and geeky humor is hard to beat. It’s not necessarily deep in all those areas, but it’s an appealing total package for those curious about the why’s of food. 

It's the second edition of this book by Jeff Potter. At 488 pages, it's about 50 pages longer than its predecessor. There are new interviews and graphics along with a fair bit of updating and rearranging from the prior edition, although the overall look, feel, and organization aren't a major departure.

The book is written in a lighthearted and gently humorous way. Random sample from the intro to Taste, Smell, and Flavor: “You open your fridge and see pickles, strawberries, and tortillas. What do you do? You might answer: create a pickle/strawberry burrito. Or if you’re less adventurous, you might say: order pizza. But somewhere between making a gross-sounding burrito and ordering takeout is another option: figuring out the answer to one of life’s deeper questions: How do I know what goes together?” Probably not to everyone’s taste I realize, but it works for me.

It covers a broad swath of the science. The aforementioned tastes, smells, and flavors. Time and temperature—and what those mean for cooking proteins and melting fats. Additives like emulsifiers and thickening agents. Air, water, and leavening agents. It's not the science tome that is Harold McGee's On Food and Cooking, but it's a more easily accessible presentation. (Though, if you read this book and enjoy it, by all means pick up McGee and vice versa.)

Cooking for Geeks at least touches on most of the major modernist cooking practices including sous vide and practical tips for same. Arguably, some of the DIY material around sous vide is a bit dated given the price drops of modern immersion circulators but this is about experimentation after all. (The link in the book does go to a list of current equipment options though.) There are also interviews with many of the usual suspects in that space such as Nathan Myhrvold and Dave Arnold.

Is this book for the cooking-curious geek who doesn't have much real cooking experience? It could be but they might want to pair this book with another that was more focused on basic cooking techniques. The recipes here are relatively straightforward and the instructions are clear, but there's not a lot of photography devoted to the recipes and the instructions for things like Béchamel Sauce are probably a bit bare-bones for a first-timer.

I'd also generally note that the recipes are often there to provide examples of the science discussion. There isn't a lot of discussion about why this specific recipe is being made with this specific set of techniques. For that sort of thing, I recommend book(s) from the America's Test Kitchen empire, perhaps starting with their The New Best Recipe book—which also has the virtue of being a pretty comprehensive set of basic and not-so-basic recipes. It's also rather sober and by-the-numbers, a much different experience. (Alton Brown also seems to have his followers in geeky circles although I've never personally been all that enthusiastic.)

One final point is that, for many, this is a book you will flip through and alight on a topic of interest. It’s not that you couldn’t read it from cover to cover, but the many sidebars and interviews and short chunks of material seem to encourage non-linear exploration. 

Bottom line: 5/5. Highly recommended for anyone with an interest in the science of cooking even if they don’t want to get into deep chemistry and physics.

Disclaimer: This book was provided to me as a review copy but this review represents my honest assessment.

Links for 01-22-2016

The new distributed application development platform: Breaking down silos


A document came out of Gaithersburg, Maryland in 2011. Published by the National Institute of Standards and Technology, it was simply titled "The NIST Definition of Cloud Computing." If you attended tech conferences during that period, reciting some version of that definition was pretty much a requirement. The private, public, and hybrid cloud terms were in this document. So were concepts like on-demand self-service and resource pooling. As were the familiar Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) service models.

NIST didn't invent any of this out of whole cloth. But by packaging up a sort of industry and government consensus about the basics of cloud computing, it regularized and standardized that consensus. And, overall, it worked pretty well. IaaS was about provisioning fundamental computing resources like processing, storage, and networks. SaaS was about providing applications to consumers.

As for PaaS? PaaS was about applications created using programming languages, libraries, services, and tools supported by the provider. 

Arguably, this PaaS definition was never as neat as the others. IaaS resources were easy to understand; they were like the resources you have on a server, except cloudier. And SaaS was just an app on the Web—application service providers (ASPs) reimagined, if you would. PaaS was sort of everything that was above infrastructure but below an application an end-user could run directly. Cloud-enabled middleware, hooks to add features to a single online service like Salesforce.com, single-purpose hosted programming environments (as Google App Engine and Azure were initially), and open extensible environments like OpenShift that could also be installed on-premise. Most fell broadly under the PaaS rubric. 

The NIST definition also didn’t really capture how the nature of the service model depends upon the audience to an extent. Thus, Salesforce.com is primarily a SaaS as far as the end-user is concerned but it’s a PaaS in the context of developers extending a CRM application. 

Today, I'd argue that the lines NIST drew probably still have some practical significance but the distinctions are increasingly frayed. IaaS platforms have almost universally moved beyond simple infrastructure. OpenStack has compute (Nova), storage (Swift and Cinder), and networking (Neutron) components but it also includes database projects (Trove), identity management (Keystone), and the Heat orchestration engine to launch composite cloud applications.

In many cases these higher-level functions can be either used standalone or replaced/complemented by more comprehensive alternatives. For example, in a hybrid cloud environment, a cloud management platform like Red Hat CloudForms (ManageIQ is the upstream project) provides multi-cloud management and sophisticated policy controls. The IaaS+ term is sometimes used to capture this idea of more than base-level infrastructure but less than a comprehensive developer platform.

In the case of SaaS, today’s APIs everywhere world means that most things with a UI also can be accessed programmatically in various ways. In other words, they’re platforms—however limited in scope and however tied to a single application.

But, really, the fraying is broader than that. I’ve argued previously that we’re in the process of shifting toward a new style of distributed application infrastructure and of developing applications for that infrastructure. It won’t happen immediately—hence, Gartner’s bimodal IT model—but it will happen. In the process, traditional specialties/silos (depending upon your perspective) are breaking down. This is true whether you’re talking enterprise buyers/influencers, IT organizations, industry analysts, or vendors. 

As a result, it's hard to separate PaaS--in the relatively narrow sense that it was first discussed--from the broader idea of an application development platform with middleware integration, messaging, mobile, and other services. Red Hat's doing a lot of work to bridge those two worlds. For example, Red Hat's JBoss Middleware portfolio of libraries, services, frameworks, and tools is widely used by developers to build enterprise applications, integrate applications and data, and automate business processes. With JBoss xPaaS Services for OpenShift, these same capabilities are being offered integrated with OpenShift. This lets developers build applications, integrate with other systems, orchestrate using rules and processes, and then deploy across hybrid environments.

The advantage of the xPaaS approach is that it doesn't merely put middleware into the cloud in its traditional form. Rather, it effectively reimagines enterprise application development to enable faster, easier, and less error-prone provisioning and configuration for a more productive developer experience. Eventually all of the JBoss Middleware products will have xPaaS variants. In each case, the core product is exactly the same whether used in a traditional on-premise manner or as xPaaS, so apps can be moved seamlessly between environments. In the xPaaS environment, JBoss Middleware developers benefit from OpenShift-based user interface enhancements, automated configuration, and a more consistent experience across different middleware products.

Then DevOps [1] comes along to blur even more lines because it brings in a whole other set of (often open source) tooling including CI/CD (e.g. Jenkins), automation and configuration management (e.g. Ansible), collaboration, testing, and monitoring. These are increasingly part of that new distributed application platform, as is the culture of iteration and collaboration that DevOps requires.

I have trouble not looking at this breakdown of historical taxonomies as a positive. It offers the possibility of more complete and better integrated application development platforms and more effective processes to use those platforms. It’s not the old siloed world any longer.

[1] I just published this white paper that gives my/Red Hat’s view of DevOps.

Photo credit: Flickr/cc https://www.flickr.com/photos/timbodon/2200884398