Tuesday, August 23, 2016

Video: Getting started with DevOps

At Red Hat Summit, Dylan Silva and I co-presented on getting started with DevOps. We focused on two scenarios. I discussed using OpenShift to create a complete DevOps environment and workflow. Dylan covered using Ansible to quickly and easily automate tasks that are getting in the way of developer and ops productivity.

Series of posts about modernizing virtualization


In a recent series of posts on the Red Hat blog, I took a look at modernizing virtualization, with a particular emphasis on incrementally building on existing virtualization investments and on managing often-heterogeneous virtualized environments.

The first post set the stage for the series, noting that:

Some readers might be thinking that virtualization is yesterday’s news. But it continues to play a major role within just about every enterprise IT infrastructure whether measured by the number of applications it touches, the expense of supporting it, or the number of administrators needed to manage it. At the same time, it’s often not used efficiently. At Directions 2016, IDC Group Vice President for Enterprise Infrastructure, Al Gillen, noted that virtual machine (VM) density is stalling out at about 10 VMs per server and between 30 to 50 percent server utilization. This leaves ample room for improved efficiencies and financial value.

The second post focused on getting things done faster, such as by introducing self-service (with Red Hat CloudForms or with Red Hat Enterprise Virtualization itself), by automating (e.g., with Ansible), and by simplifying integration.

The third in the series looked at saving time and money—always high on the list of concerns for IT operations folks. Efficient management is a big piece of this given that, in many cases, the server sprawl that virtualization was often introduced to address simply became “VM sprawl,” a similar problem at even higher scale.

The virtualization platform itself can also save money. For example, performance features in Red Hat Enterprise Virtualization (RHEV) such as KSM memory overcommitment (which allows users to define more memory in their VMs than is present in a physical host) and SR-IOV for the virtual machine network (which increases network throughput while decreasing latency and CPU overhead for near bare-metal performance) enable high VM densities. As of March 31, 2016, Red Hat held the x86 2-socket world record for SPECvirt_sc2013, the standard benchmark used to evaluate the performance of datacenter servers used in virtualized server consolidation.

Finally, I discussed how security features and compliance relate to modernizing virtualization. Again, management plays a big role. Red Hat CloudForms provides robust mechanisms for cloud infrastructure with automation, advanced virtualization management controls, private or hybrid cloud management capabilities, and operational visibility. This includes aggregate logging capabilities that let you segregate, log, and allocate resources by user, group, location, or other attributes. Among other benefits, this helps you to find systems that are out of compliance so that you can take quick remedial action.

This complements the foundation provided by Red Hat Enterprise Linux and RHEV. For example, the RHEV security model takes advantage of the SELinux and sVirt capabilities in Linux, including mandatory access control (MAC) for enhanced VM and hypervisor security.

(For a broader picture of security and compliance at Red Hat, take a look at the whitepaper that I wrote earlier this year.)

Monday, August 22, 2016

Internet-of-Things (IoT) at Gartner Catalyst


I was out at Gartner Catalyst in San Diego last week and I’ve been trying to mentally sort through what might serve as interesting observations from the event. It covers a broad range of topics relevant to technical professionals, so it’s been a bit hard for me to distill the sampling of sessions that I attended into a single storyline. However, an unrelated piece on IoT that I read this morning—and the graphic that graces this page—got me thinking about some themes both specific to IoT and applicable to emerging technologies more broadly.

Accelerating pace

At one level, the fact that IoT was prominently on display in a keynote, as well as in a variety of breakouts, is certainly not surprising. There was plenty on containers and container orchestration too. Gartner VP Eric Knipp even highlighted open source as a “cool forever” technology. Well, duh, you may be thinking. Do any of the cool kids not talk non-stop about topics such as these?

Here’s the thing though. How shall I put this nicely? Historically, the cool kids tended not to go to Gartner conferences or be Gartner clients. Indeed, for many shades of cool, that’s still the case. This isn’t predominantly a startup crowd.

Many of those conservative banks and manufacturing companies and logistics vendors now care about the latest technologies as well. They have to; digital transformation is a thing and the cost of doing nothing is higher than ever. They may adopt new IT approaches more slowly and methodically than the Silicon Valley company setting VC dollars on fire. But they’re at least interested in learning about projects, products, and tech that may not have even existed a couple of years ago. It’s a big change from the days when a lot of these folks weren’t especially interested in anything that wasn’t already in production at a hundred other sites in their industry.

IoT: The Bad

A scenario from the keynote: A car window gets broken by a thief. The police are automatically summoned. The owner’s insurance company is informed. Repair quotes are automatically generated and a repair is quickly scheduled in time for that evening’s anniversary dinner plans.

Heartwarming. Efficient. A marvel of modern networked communications.

Skip over for the moment all the automagical seamless interaction of communication systems, document formats, and workflows designed to “just work.” At the risk of being a luddite, this degree of autonomous interaction with and between third-parties sets off my creep-meter. 

Perhaps I’m overreacting in this specific scenario. But I think it’s hard to argue with the fact that we’re being bombarded with more and more IoT examples in which it seems that someone stopped at the “can it be done” stage without much, if any, thought given to the “should it be done” question. Whether because it’s creepy, or because it solves a problem that no one has.

IoT: The Good

Yet the same keynote also featured an IoT use case from Duke Energy that was exemplary on a couple of fronts.

For one thing, it was a joint presentation featuring leaders from both information technology (IT) and operational technology (OT) within the company. Driving cooperation between IT and OT was a key theme both in this session and woven throughout the conference as a whole. Reporting from the event, Craig Powers writes:

 “At Duke, OT and IT have been apart for many years—we barely touched,” says Shawn Lackey, director of strategy and architecture at Duke Energy. “OT was mainly analog; IT was moving towards digital—we needed to start bringing the two together.”

Lackey was speaking Monday during the opening keynote of industry analyst firm Gartner’s Catalyst Conference in San Diego.
But connecting OT and IT wasn’t necessarily easy, just ask Lackey’s colleague Jason Handley, director of smart grid technology and operations at Duke, who also spoke during the Gartner Catalyst keynote.

“I did not want to talk to IT to begin with,” Handley jokes. “But my attitude has changed. Legacy operations for OT were based on protection and control and nothing else is going to trump that—but now we need to move digital data, and that is only happening by partnering with the IT folks.”

It’s also just a good example of building toward an integrated system that meets genuine business and customer needs. The video goes into more detail but the basic idea is that, using real-time sensor and metering information, the grid will be able to quickly route around certain types of physical damage.

IoT: The Ugly

The keynote didn’t have much to say about IoT security but breakouts dove into considerable detail. For example, Gartner’s Erik Wahlstrom covered “Securing Digital Access in IoT Solutions” while his colleague Erik T. Heidt spoke on “Securing IoT from Edge to Platform and Beyond."

A couple of common themes were lifecycle management and dealing with the diversity of edge devices.

For example, Wahlstrom noted that “sneaker net” is still a common way to provision identity in IoT; the problem is that when things are done this way, there’s no automatic way to provide updates and otherwise manage the device over time.

There is a lot of work going on with IoT security and identity management, including the development of new standards. For example, Enrollment over Secure Transport (EST) is "a new standard (RFC7030) designed to improve the lifecycle management of digital certificates, a key element for secure communications.” However, standards have to cover many different areas—this presentation by James Fedders of Intel gives a sense of just how many—as well as many different classes of edge device: small/smaller, connected always/sometimes, plugged in/low power, etc. For example, the aforementioned EST requires HTTP to work and therefore isn’t a fit for the most constrained edge devices.

I’d sum up the security (using the term broadly) conversation this way: there’s a general recognition that it’s important, and work is going on. But there’s a huge amount left to be done and, while security is valued in principle, I see far less evidence that it’s universally valued in practice.

[1] Graphic by http://anandmanisankar.com/posts/IoT-internet-of-things-good-bad-ugly/

Video: Lessons learned on the DevOps front

At Red Hat Summit in June, Katrinka McCallum and Jay Ferrandini shared their experiences and lessons learned in the process of rolling out DevOps in Red Hat's Products and Technologies organization. They call it their banana/pickle story, a reference to the pre-DevOps challenge of delivering to users the banana they asked for rather than a pickle they didn't want.

This was one of the highest-rated sessions in the IT Strategy track at Summit and is a must-watch if you're interested in case studies about how real organizations are implementing DevOps, the challenges they face, and the benefits they gain.

Wednesday, August 03, 2016

Gordon's Hafftime #9: IoT

Topics:

  • ThingMonk coming up and placating the demo gods
  • Podcast relaunch upcoming
  • Thinking about IoT through a vertical lens

Get this issue

Subscribe

Tuesday, August 02, 2016

Links for 08-02-2016

Photos: Japanese temples, food, and more

Omicho Market, Kanazawa

After speaking at LinuxCon in Japan, I spent a week touring about Kyoto, Takayama, and Kanazawa. All my photos here.

Thursday, July 14, 2016

Presentation: Containers: Don't Skeu Them Up (Use Microservices Instead)

William Henry and I gave this presentation at LinuxCon Japan in 2016. (About 8 hours after getting a panicked 3am email to fill a no-show slot.) It's similar to what we gave at LinuxCon Dublin last year but there are a few updates.

Skeuomorphism usually means retaining existing design cues in something new that doesn't actually need them. But the basic idea is far broader. For example, containers aren't legacy virtualization with a new spin. They're part and parcel of a new platform for cloud apps including containerized operating systems like Project Atomic, container packaging systems like Docker, container orchestration like Kubernetes and Mesos, DevOps continuous integration and deployment practices, microservices architectures, "cattle" workloads, software-defined everything, management across hybrid infrastructures, and pervasive open source.

In this session, Red Hat's Gordon Haff and William Henry will discuss how containers can be most effectively deployed together with these new technologies and approaches -- including the resource management of large clusters with diverse workloads -- rather than mimicking legacy sever virtualization workflows and architectures.

Presentation: Fail fast, fail often

My colleague William Henry and I just gave this presentation at LinuxCon in Tokyo. It ties into central DevOps concepts such as experimentation, constant iteration, and having a culture that supports these types of activities.

Here was the abstract:

Software projects were historically managed on a bet-the-farm model. They succeeded or they failed. And when they failed (as big software projects often did), the consequences were typically dire not only for organizations as a whole, but for many of the individuals involved. Today, by contrast, many software development projects have evolved toward a much more incremental, iterative, and experimental process that takes cues from the open source model, which excuses (and even rewards) certain types of failure.

In this session, we’ll discuss how failure can be turned into a positive. This includes the organizational dynamics associated with tolerating uncertain outcomes, the need to define acceptable failure parameters, and the technical means by which experimentation can be automated in ways that amplify the positive while minimizing the effect of negative outcomes.

Tuesday, July 12, 2016

Gordon's HaffTime - Issue #7 is live

Issue #7 covers travels, Red Hat Summit, and automation, and gives a shout-out to a really good game.

Become a subscriber.

Thursday, June 30, 2016

Red Hat Summit: Red Hat Identity and Access Management

Dmitri Pal and Ellen Newlands talk identity management at Red Hat Summit.

In this session, Red Hat's Ellen Newlands and Dmitri Pal discussed Red Hat's identity management portfolio from a near-term perspective, and presented the long-term roadmap—along with some advice for implementing identity management.

These are some highlights and quotable moments from the talk.

Identity management is complex but it’s something you need to do to protect your environment and, ultimately, the assets of your organization. Red Hat is focusing on making IdM automated and cost effective so that customers can focus on their business. It’s Red Hat’s job to provide the expertise.

The areas of vision:

Authentication

You need to be able to authenticate from different types of credentials including passwords, certificates, smart cards, and OTP tokens. And to use single sign-on with Kerberos, SAML, and OpenID Connect. Weave together multiple operating systems, multiple credentials, and multiple authentication schemes (including a trust relationship between IdM and Active Directory in a Microsoft environment).


Consistent Delivery

If you have consistency in the identities and access to them, you can deliver to systems, services, and applications together with policies to control access and privilege escalation. The goal is to make the use of this environment relatively seamless (even given the complexity).

Managed Security

The challenge is the management of identities in a complex interoperable world. The keys, the certificates, the other secrets need to be automatically provisioned, tracked, and rotated on an as-needed basis.

DevOps Enablement

Developers need to have the tools to build the next generation of containerized and non-containerized applications with authentication and the consistent delivery of security. If the developer can’t do this, what they’re doing isn’t much use in a production environment.

Some guidance for identity management:

  • Single source of identities. Don't copy passwords around! It also makes audits much easier when identities are in a single place.
  • Single sign-on is good. You need to protect the keys to the kingdom, but once you've established it, use it as much as possible.
  • Don't put passwords into files. Instead use Kerberos or certificates, or fetch secrets on the fly (see the sketch after this list). When you build applications and stitch things together, think about how they're going to talk with each other. It requires a bit more effort but don't be afraid to move forward.
  • Automate your operations. We are in an era where changes are happening in real time. Continuous integration and deployment of applications are needed to meet these business needs. Adopt the tools you need to do things in a simple, repeatable way. (For example, Ansible.)
  • Integrate applications so that applications can interface with each other in the context of the user. These interactions need to be managed—which is where an IdM Fabric, as shown below, comes in.
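
To make the "fetch secrets on the fly" guidance a bit more concrete, here's a minimal sketch that assumes a HashiCorp Vault server and the hvac Python client; the Vault address, token source, and secret path are all invented for illustration, and other secret stores would work similarly.

```python
import os
import hvac  # HashiCorp Vault client; one of several ways to fetch secrets at runtime

# Locate and authenticate to the secret store via the environment,
# not via anything checked into a config file or source control.
client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])

# Read the credential on demand; the path and key are hypothetical.
secret = client.read("secret/myapp/db")
db_password = secret["data"]["password"]

# Use the credential immediately; rotate it centrally without touching the app's files.
print("fetched database credential for myapp")
```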

Identity Management Fabric.

Wednesday, June 22, 2016

Getting Started with DevOps (for devs) at Red Hat Summit

This post shares some highlights from Decomposing DevOps: Understanding How to Get Started, a session that I gave at Red Hat Summit in San Francisco this week together with Dylan Silva of Ansible by Red Hat.


Our general approach in this session was to look at DevOps on-ramps first from a primarily developer-centric perspective and then from a more ops-oriented one. These aren’t mutually exclusive, especially given that DevOps inherently commingles dev and ops concerns to a certain degree. Nonetheless, thinking about these two different constituencies is one useful way to frame the discussion.

For the dev-centric point of view about DevOps (which is what I’ll cover in this post), I find that a manufacturing metaphor provides useful insights. Whether building a car or building an application, what are some of the important principles to follow?

The first is automation. Worker productivity in automotive manufacturing has doubled in something like the past 20 years. Automating repeatable processes has simultaneously improved predictability and quality. (I guess it’s an exception to the “Better, faster, cheaper: Pick two” rule.) Similarly, developer automation that leads to an iterative CI/CD pipeline with build/test/approve/deliver/deploy stages leads to both faster and higher-quality software delivery.
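
To make that concrete, here's a toy sketch (deliberately not tied to any particular CI/CD tool's API) of the build/test/approve/deliver/deploy flow: each stage is an automated, repeatable step, and a failure stops anything from moving further down the line. The make targets are placeholders.

```python
import subprocess

def stage(name, cmd):
    # Run one automated pipeline stage and stop the whole pipeline on failure.
    print(f"--- {name} ---")
    if subprocess.run(cmd).returncode != 0:
        raise SystemExit(f"stage '{name}' failed; stopping the pipeline")

def approve():
    # A human or policy gate between testing and delivery.
    if input("promote this build? [y/N] ").strip().lower() != "y":
        raise SystemExit("build not approved; stopping the pipeline")

if __name__ == "__main__":
    stage("build", ["make", "build"])
    stage("test", ["make", "test"])
    approve()
    stage("deliver", ["make", "package"])
    stage("deploy", ["make", "deploy"])
    print("pipeline complete")
```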


W. Edwards Deming, one of the original champions of statistical quality control, once said that “Without data you’re just another person with an opinion.”

Measurement and metrics are likewise a key ingredient in an iterative software development pipeline. DevOps metrics were actually the topic of a separate birds-of-a-feather discussion that I led with my colleague William Henry. A full discussion is beyond the scope of this post, but when thinking about collecting data it’s useful to step back, consider your most important objectives, and then design a plan from that starting point.
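
For example, if you decide that deployment frequency, lead time, and change failure rate are the numbers that map to your objectives, deriving them from deployment records is straightforward. This is purely an illustrative sketch with made-up data, not a recommendation of any particular toolchain.

```python
from datetime import datetime, timedelta

# (committed_at, deployed_at, deployment_succeeded) -- invented sample records
deployments = [
    (datetime(2016, 6, 1, 9, 0),  datetime(2016, 6, 1, 15, 0), True),
    (datetime(2016, 6, 2, 10, 0), datetime(2016, 6, 3, 11, 0), False),
    (datetime(2016, 6, 6, 8, 30), datetime(2016, 6, 6, 12, 0), True),
]

window = max(d[1] for d in deployments) - min(d[1] for d in deployments)
deploys_per_day = len(deployments) / max(window.days, 1)
lead_times = [deployed - committed for committed, deployed, _ in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)
failure_rate = sum(1 for *_, ok in deployments if not ok) / len(deployments)

print(f"deploys per day:     {deploys_per_day:.2f}")
print(f"average lead time:   {avg_lead_time}")
print(f"change failure rate: {failure_rate:.0%}")
```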

The automotive industry, like others, has embraced the idea of modularity and reuse; about half of all cars are built on a modest number of platforms that share key components and design elements across models and across brands. We see this same modularity in the concept of microservices—small, autonomous, bounded context services that communicate through APIs. Even when microservices aren’t adopted in their purest form, automated DevOps pipelines work most effectively when deployments can be small, frequent, and generally decoupled from other deployments. 

If DevOps can be thought of as transitioning software development from craftwork to a more industrialized set of processes, then the place where new cloud-native apps run can be equally thought of as morphing from a workshop to a factory. This factory has characteristics such as the following:

  • Software-defined infrastructure and/or public cloud
  • Automation
  • Application lifecycle management
  • Developer experience including self-service
  • Application services (databases, messaging, integration, mobile)
  • Container ecosystem
  • Orchestration and resource control
  • Management

There’s enormous innovation happening in all these areas across a wide range of open source communities. However, making all this consumable for developers is certainly a challenge. That’s a big reason why, in a study of DevOps early adopters conducted last year for Red Hat, IDC found that 80 percent expected PaaS to play a crucial role in enabling DevOps because of “Platform-as-a-Service (PaaS) cloud infrastructure, self-service developer platform and tools, and lifecycle management with DevOps processes—speeding time to value for both developers and operations."

That’s exactly what OpenShift does by bringing together technologies such as Red Hat Enterprise Linux, docker-format containers, Kubernetes orchestration, CI/CD pipelines, and developer tooling. Features include:

  • Self-service
  • Multiple interaction models
  • Polyglot, multi-language support
  • xPaaS services
  • Immutable, container-based platform
  • Automation for application builds, deployments, scaling, and health management
  • Persistent storage option

The result? Developers get up and running quickly and gain access to the platform and tools that they need to be productive without sweating the details of the underlying infrastructure.

Tuesday, June 21, 2016

What are the right metrics for DevOps?


If you’ll be at Red Hat Summit check out our BoF and other DevOps sessions.

Ask about metrics for DevOps and the natural reaction is to jump to familiar technology-focused measurements. Uptime. Code deploys per hour, day, or month. Deployment failure rate. Even lines of code.

Certainly, it’s important to have metrics.

DevOps works because continuous iteration and improvement is fundamentally a more rapid and flexible approach to software development than slow, rigid project cycles. However, fully realizing this approach to developing and deploying software means putting in place the measurement systems, technologies, and metrics that surface insights which can then be acted on appropriately. Adding to the complexity is the need to present appropriate metrics for different audiences such as developers, operations, and business leaders.

Tracking narrow technical indicators can indeed be useful as part of gauging the success of your DevOps initiatives. Analysis may reveal trend lines pointing in the wrong direction. An increased number of failures that lead to customer-facing outages can hardly be anything but a negative indicator. Conversely, continuous improvements in measurements such as time to deploy new services are a good sign that a DevOps initiative is helping to produce positive outcomes.

However, one needs to be careful about overly focusing on easy-to-measure numbers that aren’t necessarily correlated with business outcomes. It can be useful to step back. What are you really trying to accomplish? What’s important to you?

Are you most focused on the deployment velocity of new services? Is improved code quality or more rapid security updates the more pressing factor behind your DevOps? Or are you taking a broader organizational view that emphasizes cross-team collaboration?

These are some of the questions that we’ll be asking as part of what’s sure to be a spirited discussion at the How to Know if Your DevOps is Working birds of a feather session that I’ll be moderating along with Red Hat’s DevOps Strategy Lead, William Henry.

Also be sure to check out the many other Summit sessions related to DevOps.

I’ll be co-presenting with Ansible GM Todd Barr in Getting Started with DevOps. We’ll be talking about on-ramps from both a primarily developer-centric perspective (with a focus on OpenShift) and a more ops-centric one (with a focus on Ansible). Other DevOps sessions include:

There are also a variety of sessions that directly focus on how organizations are successfully making use of Red Hat products to implement DevOps today. These include:

Finally, I’d just note that a big reason that DevOps is such a hot topic right now is that it’s part and parcel of a host of interrelated technology, architectural, and business model changes that generally fall under the term digital transformation. Containers, container management, microservices, software defined infrastructure, mobile, and the internet of things are among the technologies that dovetail with DevOps to enable organizations to deliver new digital services quickly and cost effectively. There are lots of Summit sessions on those topics too. Check them out.

Photo credit: Flickr/CC https://www.flickr.com/photos/christinawelsh/5569561425

Thursday, June 16, 2016

Newsletter Issue #6 - Red Hat Summit, OpenShift and more

Read it and/or subscribe!

The end of cattle vs. pets


Metaphors and models have finite lifespans. 

This usually happens for one of two reasons.

The first is that metaphors and models simplify and abstract a messy real world down to especially relevant or important points. Over time, these simplifications can come to be seen as too simple or not adequately capturing essential aspects of reality. (This seems to be what’s going on with the increasing pushback on “bimodal IT.” But that’s a topic for another day.)

The other reason is that the world changes in such a way that it drifts away from the one that was modeled.

Or it can be a bit of both. That’s the case with the pets and cattle analogy as it’s been applied to virtualized enterprise infrastructure and private clouds. 

The “pets vs. cattle” metaphor is usually attributed to Bill Baker, then of Microsoft. The idea is that traditional workloads are pets. If a pet gets sick, you take it to the vet and try to make it better. New-style, cloud-native workloads, on the other hand, are cattle. If the cow gets sick, well, you get a new cow.

Pets and cattle roughly corresponded to the Systems of Record and Systems of Engagement taxonomy proposed by consultant Geoffrey Moore (of Crossing the Chasm fame). The former were stateful, big, long-lived, scale-up, and managed/maintained at the individual machine level. The latter were assumed to be stateless, small, transitory, scale-out, and managed at the level of the entire application (with individual VM instances destroyed and recreated in the event of a problem).

As an initial pass at distinguishing between traditional transactional apps and those designed along more cloud-native lines, the metaphor isn’t a bad one. I've argued that “ants” is a better fit than “cattle” because it captures the idea that individual service instances are not only disposable but they work together cooperatively to perform tasks. However, the overall concept of long-running mutable instances vs. short-lived disposable ones would seem to capture an essential distinction.

It still does, but as we as an industry continue to evolve DevOps practices and services-oriented architectural patterns for software defined infrastructure and orchestrated pools of containers, the metaphor is breaking down for several reasons. 

State matters

Many of the components/instances of a cloud-native application should be designed so that they are stateless. That is, they should use ephemeral storage—which is to say storage and data that only sticks around for the life of the instance itself. However, no one’s claiming that the data doesn’t need to live somewhere. For example, in twelve factor app parlance, there’s the concept of a backing service which "is any service the app consumes over the network as part of its normal operation. Examples include datastores (such as MySQL or CouchDB), messaging/queueing systems (such as RabbitMQ or Beanstalkd), SMTP services for outbound email (such as Postfix), and caching systems (such as Memcached)."
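
In practice, treating a backing service as an attached resource mostly comes down to locating it through configuration in the environment rather than baking connection details into the instance itself. A minimal sketch of that pattern (the variable names and URLs are invented):

```python
import os

# Twelve-factor style: backing services are attached resources located via config
# in the environment; swapping a local service for a managed one is a config change.
DATABASE_URL = os.environ.get("DATABASE_URL", "postgresql://localhost:5432/app")
CACHE_URL = os.environ.get("MEMCACHED_URL", "memcached://localhost:11211")

def connect(url):
    # Placeholder: a real app would hand these URLs to its database/cache client libraries.
    print(f"connecting to backing service at {url}")

connect(DATABASE_URL)
connect(CACHE_URL)
```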

As we move to containerized infrastructures, the stateful vs. stateless dichotomy becomes particularly important because containers are explicitly designed to be immutable. As Keith Tenzer describes in this post about OpenShift v3 persistent storage: "Docker images are immutable and it is not possible to simply store persistent data within containers. When applications write to the Docker union file system, that data is lost as soon as the container is stopped.”

However, it’s still possible to associate persistent data with containers so that an entire application can be containerized. Keith goes on to note that:

OpenShift v3 supports using persistent storage through Kubernetes storage plugins. Red Hat has contributed plugins for NFS, ISCSI, Ceph RBD and GlusterFS to Kubernetes. OpenShift v3 supports NFS, ISCSI, Ceph RBD or GlusterFS for persistent storage. Kubernetes deploys Docker containers within a pod and as such, is responsible for storage configuration. Details about the implementation of persistent storage in Kubernetes can be found here.

Kubernetes allows you to create a pool of persistent volumes. Each persistent volume is mapped to a external storage file system. When persistent storage is requested from a pod, Kubernetes will claim a persistent volume from the pool of available volumes. The Kubernetes scheduler decides where to deploy the pod. External storage is mounted on that node and presented to all containers within pod. If persistent storage is no longer needed, it can be reclaimed and made available to other pods.

Most cloud-native applications require there to be persistent storage somewhere. While one can assume that it’s provided through a service running somewhere else, a platform supporting the development of complete cloud applications needs to provide persistence mechanisms within that platform.
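
To make the claim mechanism in the quoted passage a bit more concrete, here's a rough sketch using the Kubernetes Python client: the application side simply requests storage via a PersistentVolumeClaim and lets the platform bind it to a volume from the available pool. The claim name, size, and namespace are invented for illustration.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod
core = client.CoreV1Api()

# Request 5Gi of persistent storage; the cluster binds this claim to a matching
# persistent volume (NFS, Ceph RBD, GlusterFS, etc.) from the available pool.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="demo", body=pvc)
# Pods then mount the claim by name; the data outlives any individual container.
```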

Virtualization and cloud/software defined infrastructure convergence

Software-defined infrastructure technologies also made initial simplifying assumptions in other areas such as networking architectures and the maintenance of running instances. Some of this reflected a sincere, if sometimes naive, desire to dump legacy encumbrances. However, it was also about getting an MVP out the door in a rapidly changing world.

We’re seeing today the reintroduction of virtualization features required for “enterprise use cases” into projects such as OpenStack. Thus, Neutron (OpenStack networking) isn’t just about flat networking architectures, and current versions of OpenStack support live migration of instances whether using shared storage or block-based storage associated with a single image. The fact that many of the same technologies, such as the KVM hypervisor in Linux, bridge enterprise virtualization and cloud simplifies bringing the two worlds together. (Of course, it’s probably increased the complexity of OpenStack relative to what it would look like in a purist cloud-only world. Such a purist OpenStack would also likely not be very useful.)

The continued evolution of the new application platform

Perhaps most of all though, the metaphor is breaking down because the idea that there are two canonical application architectures seems increasingly simplistic. 

I’ve already covered how persistent storage is an important component of most modern cloud-native applications. 

It’s also the case that many new applications will indeed be decomposed into lightweight single-function microservices that expose public APIs. However, getting to that point will be an evolution. Martin Fowler, who helped popularize the microservices term, has even argued for a “Monolith First” strategy in some cases, including for projects that are big enough to justify a shift to microservices over time. As a result, blanket statements about horizontal app scalability and disposable services don’t apply universally—even for apps that are greenfield and reasonably considered cloud-native.

Applications also have substantially different patterns that relate to how instances are clustered together using technologies such as Kubernetes (which OpenShift uses as its orchestration layer). Some types of applications are batch oriented in the vein of traditional high performance computing/grid, while others are composed from multiple layers of services communicating through APIs. There’s also considerable variety not only in the absolute scale of application components being scheduled and orchestrated, but also in the variety of the components (large, small, frequency of scheduling, etc.) and requirements related to quality-of-service, latency sensitivity, and so forth.

In short, while there are certain patterns that we tend to associate with cloud-native applications, there’s also much variety and even divergence in key aspects. Furthermore, it turns out that some traditional enterprise application characteristics such as persistent state and tightly-coupled components continue to play a role even for greenfield cloud apps. 

It’s not cattle and pets out there. It’s a whole menagerie!

Photo credit: https://www.flickr.com/photos/agrilifetoday/6830903723

Wednesday, June 08, 2016

Presentation: The New Platform: You Ain't Seen Nothing Yet

The now mainstream platform changes stemming from the first Internet boom brought many changes but didn’t really change the basic relationship between servers and the applications running on them. In fact, that was sort of the point. Today’s workloads require a new model and a new platform for development and execution. The platform must handle a wide range of recent developments, including containers and Docker, distributed resource management, and DevOps tool chains and processes. The resulting infrastructure and management framework must be optimized for distributed and scalable applications, take advantage of innovation stemming from a wide variety of open source projects, span hybrid environments, and be adaptable to equally fundamental changes happening in hardware and elsewhere in the stack.

From CloudExpo, New York City

Links for 06-08-2016

Hybrid and IoT themes from CloudExpo


There are a couple of themes that I seem to keep running into and this week at CloudExpo was no exception. Neither is exactly new. In fact, the first is something that some of us have been saying for a very long time. But both seem to have crossed some threshold to become a widely-understood normal.

The first of these is the acknowledgement that computing is heterogeneous and hybrid. From my perspective, this is barely worth remarking upon at this point with so many companies flinging around the word “hybrid” with wild abandon—however newly they’ve come to this particular reality. 

At this point, let me mention that I wrote a piece for CNET titled “There is no Big Switch for Cloud Computing” in 2009 when I was still an analyst. The Big Switch in question being the title of Nick Carr’s book in which he laid out the argument for an electric grid-like utility for computing. And my now-employer, Red Hat, has likewise been talking about portability across hybrid physical, virtual, private, and public cloud environments for almost as long.

Nonetheless, IBM’s Phil Jackson felt the need to emphasize the hybrid theme in his CloudExpo keynote. By itself, this probably wouldn’t have caught my attention given how many vendors are now belatedly embracing hybrid environments in their pitches. 

However, by coincidence, I also got into a conversation with John Mark Troyer, formerly of VMware and a generally smart guy. It started with his comment about “multi cloud” being a driver of projects like Kubernetes and DCOS (Mesos). He added that he was mostly thinking about "how conventional wisdom has shifted quickly from AWS-only to multi-cloud” and that "despite recent Oracle-Google ruling, cloning a hostile API is still not for the faint of heart."

He’s right. It wasn’t that long ago that there was a significant school of thought that the AWS API was key to any cloud strategy. That was essentially the whole basis of Eucalyptus’ business plan—allow organizations to build an AWS-compatible cloud. (HP bought Eucalyptus in 2014.) 

There are a variety of reasons why API compatibility with AWS largely dropped off the cloud agenda. That discussion would deserve its own piece. Suffice it to say though that there’s effectively been an ongoing and steady movement away from the view of cloud as a homogenized commodity toward something that’s hybrid in place, hybrid in service type (IaaS, PaaS, and SaaS), hybrid in the types of capabilities offered, and hybrid in the audience for which it’s designed.

The second is that IoT discussions can be maddeningly unfocused. It’s about so many largely orthogonal topics that it’s hard to talk about technologies, business models, ecosystems, etc. associated with IoT except in the most general sense. Sure, we can define it as the interface between the physical and the digital world or we can discuss how IoT uses data to gain insights and optimize actions. But that’s really broad. As my colleague Kurt Seyfried wrote me: "Saying IoT is like 'Shipping of Things,' aka every business using the post/delivery system... It's too generic to be useful."

As I wrote after the MIT Enterprise Forum’s Connected Things 2016 event: "IDC’s Vernon Turner admitted that "It is a bit of a wrestling brawl to get a definition.” (For those who don’t know IDC, they’re an analyst firm that is in the business of defining and sizing markets so the fact that IDC is still trying to come to grips with various aspects of defining IoT is telling.) 

We’ve had something of the same issue with cloud. (I also wrote “Just don’t call them private clouds” in 2009.) And I imagine that, at the end of the day, we’ll muddle through in more or less the same way. But it’s worth at least observing that consumer wearables and smart home devices have little in common with industrial IoT solutions. For that matter, it’s unclear how much solutions associated with healthcare, retail, agriculture, “smart cities,” and logistics/transportation will have in common beyond some sensor technology.

Wednesday, May 25, 2016

Issue #4 of my newsletter is live

This issue has links to an article I recently had published on public cloud security as well as to discussions around using Ansible with docker-compose and why it's important to orchestrate containers using tools such as Kubernetes.

Links for 05-25-2016

Thursday, May 19, 2016

Data, security, and IoT at MIT Sloan CIO Symposium 2016

As always, the MIT Sloan CIO Symposium covered a lot of ground. Going back through my notes, I think it’s worth highlighting a couple sessions in particular—in addition to the IoT birds of a feather that I led at lunchtime. They all end up relating to each other through data, data security, and trust.

Big Data 2.0: Next-Gen Privacy, Security, and Analytics moderated by Sandy Pentland of the MIT Media Lab

There were two major themes in this panel.

Sandy Pentland

The first was that it’s not about the size of the data but the insights you get from it. This is perhaps an obvious point but it’s fair to say that there’s probably been too much focus on how data gets stored and processed. These are important technical questions to be sure. But they’re technical details and not the end in itself.

I might be more forgiving had I not lived through the prior data warehousing enthusiasm of the mid- to late-1990s. As I wrote five years ago: "There are many reasons that traditional data warehousing and business intelligence has been, in the main, a disappointment. However, I'd argue that one big reason is that most companies never figured out what sort of answers would lead to actionable, valuable business results. After all, while there is a kernel of truth to the oft-repeated data warehousing fable about diapers and beer sales, that data never led to any shelves being rearranged."

However, the other theme is newer—or at least amplified. And that’s ensuring the security of data and the privacy of those whose data is being stored. One idea that Sandy Pentland discussed is sharing answers (especially aggregated answers) rather than raw data. See enigma.mit.edu as an example of a system that's designed to make it possible for parties to use and maintain data without having full access to that data. Pentland also noted that because systems such as this make it possible to securely ask questions across jurisdictional boundaries, they could help address some of the often conflicting laws about the treatment of personally identifiable information.
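
The flavor of “share answers, not raw data” is easy to illustrate. This toy sketch is mine, not how the MIT system works: a query layer hands back only aggregates and refuses to answer at all when the group is small enough to identify individuals.

```python
from statistics import mean

# Invented records standing in for data a party holds but doesn't want to hand over.
records = [
    {"region": "north", "spend": 120.0},
    {"region": "north", "spend": 95.0},
    {"region": "north", "spend": 143.0},
    {"region": "south", "spend": 210.0},
]

MIN_GROUP_SIZE = 3  # don't release aggregates over groups small enough to re-identify

def average_spend(region):
    group = [r["spend"] for r in records if r["region"] == region]
    if len(group) < MIN_GROUP_SIZE:
        return None  # too few records to release even an aggregate answer
    return mean(group)

print(average_spend("north"))  # an aggregate answer
print(average_spend("south"))  # None: the group is too small to answer safely
```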

Getting Value from IoT

At my luncheon BoF table, we had folks with a diverse set of IoT experiences including Ester Pescio and Andrea Ridi of Rulex Analytics, Nirmal Parikh of Digital Wavefront, and Ron Pepin, a consultant and former Otis Elevator CIO. The conversation kept coming back to value from data. What data can you gather? What can you learn from it? And, critically, can you do anything with that data to create business value?

Per my earlier comment about data warehouses, gathering the data is relatively straightforward. It may not be easy, especially when you’re dealing with sensors that aren’t on your own property and therefore need dedicated networks of some sort. But the problems are mostly understood. It’s “just" a case of engineering cost-effective solutions.

But what data and what questions? Ron Pepin shared his experiences from Otis. Maintenance is a big deal for elevators. It’s also the main revenue stream; the elevators themselves are often a loss leader. Yet proactive elevator maintenance mostly consists of preventative maintenance on a fixed schedule. 

Anders Brownworth, Principal Engineer at Circle, on the blockchain panel

It seems like a problem tailor-made for IoT. Surely, one can measure some things and predict impending failures. But it’s not obvious what combination of events (if any) are reliable signals for needed maintenance. There’s a potential for more intelligent and efficient maintenance but this isn’t a case where you can cost effectively just instrument everything—someone else owns the building—and the right measurements aren’t obvious. Is it number of hours, number of elevator door reversals, temperature, load, particular patterns of use, something else, or none of the above?

The Blockchain

Given the level of hype around blockchain, perhaps the most interesting thing about this panel by Christian Catalini of MIT Sloan was the lack of such hype.

Interest, yes. Catalini described how blockchain is an interesting intersection of computer science, economics & market design and law. He also argued that it can not only make things today more efficient (which could potentially redefine the boundary of firms by reducing transaction costs) but also create new types of platforms.

That said, there was considerable skepticism about how broadly applicable the technology is. Anders Brownworth of Circle (which has a peer-to-peer payment application making use of blockchain) said that the benefits of blockchain broadly lie in time-based transactions, interoperability, and the ability for many parties to audit those transactions. However, with respect to private blockchains outside of finance, “we trust all the people around the table anyway” and, therefore, the auditability that’s inherent to blockchain doesn’t buy you much.

In the same vein, Simon Peffers of Intel agreed that it’s "hard to let thousands of users have the same view of data with a traditional database. But some blockchain use cases would fit with traditional database.” He added that "There is a space for smaller consortiums of organizations that know who the parties are with other requirements that can be implemented in a private blockchain. Maybe you know who everyone is but don't fully trust them."

To sum up the panel: You’re usually going to be giving up some features relative to a more traditional database if you use blockchain. If you’re not making use of blockchain features such as providing visibility to potentially untrusted users, it may not be a good fit.

Photos (from top to bottom):

Sandy Pentland, MIT Media Lab

Anders Brownworth, Principal Engineer, Circle

Tuesday, May 10, 2016

Links for 05-10-2016

My newsletter experiment

There’s a certain range of materials–curated links to comment upon, updates, and short fragments–that to me have never felt particularly comfortable as blog posts or on Twitter. Tumblr never quite did it for me and I’ve little interest in shoving content into yet another walled garden anyway. I’ve been thinking about trying a newsletter for a while and, when Stephen O'Grady joined the newsletter brigade, I figured it was time to give it a run. We’ll see how it goes.

Here’s a link to the first issue: https://www.getrevue.co/profile/ghaff/archive/19505

It includes some DevOps related links and short commentary, links to a couple of new papers I’ve written on security and deploying to public clouds, and upcoming events including Red Hat Summit in San Francisco at the end of June. (Regcode INcrowd16 saves $500 on a full conference pass!)

You can also subscribe directly to this newsletter here.

The need for precise and accurate data


Death by GPS (Ars Technica):

What happened to the Chretiens is so common in some places that it has a name. The park rangers at Death Valley National Park in California call it “death by GPS.” It describes what happens when your GPS fails you, not by being wrong, exactly, but often by being too right. It does such a good job of computing the most direct route from Point A to Point B that it takes you down roads which barely exist, or were used at one time and abandoned, or are not suitable for your car, or which require all kinds of local knowledge that would make you aware that making that turn is bad news.

It's a longish piece that's worth a read. However, it seems that a lot of these GPS horror stories--many from the US West--are as much about visitor expectations of what constitutes a "road" as anything else. It's both about the quality of the underlying data and its interpretation, things that apply to many automated systems. 

According to Hacker News commentator Doctor_Fegg:

This is clearly traceable to TIGER, the US Census data that most map providers use as the bedrock of their map data in the rural US, yet was never meant for automotive navigation.

TIGER classes pretty much any rural "road" uniformly - class A41, if you're interested. That might be a paved two-lane road, it might be a forest track. Just as often, it's a drainage ditch or a non-existent path or other such nonsense. It's wholly unreliable.

But lest you think data problems are in any way unique to electronic GPS systems, read this lengthy investigation into a 1990s Death Valley tragedy.

For what it’s worth, I did some cursory examination into what Google Maps would do if I tried to entice it into taking me on a “shortcut” through the Panamint Mountains in western Death Valley. My conclusion was that it seemed robust about not taking the bait; it kept me on relatively major roads. However, if I gave it a final destination that required taking sketchy roads to get there (e.g., driving to Skidoo), it would go ahead and map the route.

After writing this, it occurs to me that for situations such as this, we need data that is both accurate (it represents the current physical reality) and precise (it describes that physical reality in sufficient detail to support appropriate decisions).