Friday, December 18, 2015

Review: Beddit Smart Sleep Monitor

Much of the focus on activity and health trackers is on wearables. Think Fitbit, Apple Watch, and so forth. However, arguably, this isn’t the best approach for sleep tracking given that such devices need to be recharged every few days, and nighttime is the most logical time to do so.

The $150 Beddit Smart offers an alternative. It’s a thin strip that lies across your mattress, plugs into a USB port for power, and interfaces with iOS and Android apps over Bluetooth. In my testing, the strip worked even if it was underneath some amount of padding—a featherbed in my case. (The photo is a bit misleading; normally the sensor would be under at least a sheet.)

The strip is a force sensor that measures mechanical cardiac activity using ballistocardiography (BCG). According to the company, “Each time the heart beats, the acceleration of blood generates a mechanical impulse that can be measured with a proper force sensor.” By contrast, sleep clinics generally use polysomnography (PSG), which is basically a fancy way of saying that they use a variety of data from different sources to measure things like brain waves and eye movements.

This brochure provides more details about the science behind the device. While I certainly didn’t have the equipment to personally calibrate results against medical equipment, the Beddit’s data appeared to be at least roughly consistent with the readings from my Fitbit Charge HR over the same period. The Beddit, however, provides more detailed tracking of how much you’re moving around. In conjunction with a smartphone, it can also track snoring. (The Fitbit relies on data from its accelerometer which can only measure comparatively gross movements.) 

If you really want to geek out, here’s a PhD thesis by Joonas Paalasmaa, the CTO and Chief Scientist of Beddit, from the University of Helsinki about monitoring sleep with force sensor measurement. A force sensor is a thin and flexible force sensing resistor in an electrical circuit. When the force sensor is unloaded, its resistance is very high. When a force is applied to the sensor, this resistance decreases. Various techniques can then be applied to the resistance data to infer heart rate and respiration, which can then be correlated to sleep state.  
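While I obviously don’t know Beddit’s proprietary algorithms, the basic idea of turning a force-sensor trace into a heart-rate estimate can be sketched with a toy example. Everything here (the sample rate, the synthetic signal, the threshold) is invented for illustration; real BCG processing is far more sophisticated:

```python
import math

# Toy illustration of BCG-style processing: each heartbeat shows up as a
# brief mechanical impulse riding on a slow respiration wave. We detect
# beats with a simple threshold crossing. Numbers are made up.

SAMPLE_RATE_HZ = 50  # assumed sensor sampling rate

def synthetic_bcg(duration_s=60, bpm=72):
    """Generate a fake force-sensor trace: one sharp pulse per heartbeat
    plus a slow respiration component."""
    n = int(duration_s * SAMPLE_RATE_HZ)
    beat_period = 60.0 / bpm
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE_HZ
        respiration = 0.2 * math.sin(2 * math.pi * t / 4.0)  # ~15 breaths/min
        phase = (t % beat_period) / beat_period
        heartbeat = 1.0 if 0.1 < phase < 0.15 else 0.0  # brief impulse per beat
        samples.append(respiration + heartbeat)
    return samples

def estimate_bpm(samples, threshold=0.5):
    """Count rising-edge threshold crossings and convert to beats per minute."""
    beats = sum(
        1
        for prev, cur in zip(samples, samples[1:])
        if prev < threshold <= cur
    )
    duration_min = len(samples) / SAMPLE_RATE_HZ / 60.0
    return beats / duration_min

signal = synthetic_bcg(duration_s=60, bpm=72)
print(round(estimate_bpm(signal)))  # → 72
```

In practice the heartbeat impulse is tiny compared to body movement, which is why the real signal processing (filtering, adaptive thresholds, handling of movement artifacts) is thesis-worthy.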

Part of the motivation behind using the ballistocardiograph approach is that it can be practically implemented in a consumer product, while providing more detailed information than a worn accelerometer can provide. At the same time, as Paalasmaa’s thesis notes, mainstream sleep monitoring systems require the use of wearable sensors that can degrade the quality of sleep. "The unobtrusive measurement approach is particularly attractive for long-term use at home—even months or years—because the sensors are not expensive and no discomfort is caused to the user."

The sensor is unobtrusive. You do need to be sleeping on it though so if you move around a king-size bed, you’ll lose some results. Over the course of my testing, there were a couple of times during the night without data—presumably because I rolled off the sensor. Other than this reality inherent in a device you’re not wearing, I didn’t find anything about the device that didn’t work as advertised.

So, are you a potential customer for this?

If you have a desire to specifically track sleep, I’m inclined to give this device the nod over a Fitbit Charge HR (which is the wearable I have personal experience with). The fact that you can pretty much plug in the Beddit and forget about it gives it an advantage over a wearable that needs to be taken off and recharged every few days. Furthermore, the sleep data is more detailed in the case of the Beddit and is directly based on academic scientific research. The downside is that fitness bands track more than sleep and also aren’t constrained to being installed on a single bed. 

The broader question, and it’s one that I have about many wearables, boils down to something like this. OK, now that you’ve had a few days of fun and looked at your graphs, so what? How quantified does your life really need to be?

One answer of course is not very. The Beddit Smart tells me I generally sleep pretty well. But I pretty much knew that. (And my Fitbit tells me that I sometimes sleep poorly when I travel. I knew that too. It also tells me that when I spend all day writing, I don’t walk enough. Sadly, I know that as well.)

On the other hand, I could certainly see someone who doesn’t think they’re sleeping well finding a device like this a relatively inexpensive way to get some data before taking more serious steps to get to the root of the problem. The CDC estimates 50 to 70 million Americans have a sleep or wakefulness disorder.

I guess you could also try to quantify the degree to which a cup of espresso after dinner ruins your sleep although, like fitness tracking generally, I tend to be rather less systematic about such things.

Bottom line: Especially if you don’t already have a fitness band that tracks sleep, the Beddit Smart worked as advertised and is a good choice if you want to quantify your sleep patterns.

Disclaimer: The company provided me with a review unit but the opinions expressed in this review are strictly my honest evaluation of the product.

Tuesday, December 15, 2015

Links for 12-15-2015

Thursday, December 03, 2015

Presentation: The new distributed application infrastructure (SDI Summit 2015)

Today’s workloads require a new platform for development and execution. The platform must handle a wide range of recent developments, including containers and Docker (or other packaging methods), distributed resource management, and DevOps tool chains and processes. The resulting infrastructure and management framework must be optimized for distributed, scalable applications, work with a wide variety of open source packages, and provide a universally understandable interface for developers and administrators worldwide.

This is the keynote I gave at the SDI Summit in Santa Clara on December 2, 2015. It discusses the evolution from an essentially server-centric infrastructure to a more dynamic, containerized one. I'll discuss portions of this presentation in greater detail in future posts.

Wednesday, December 02, 2015

Links for 12-02-2015

Monday, November 30, 2015

DevOps initiatives shouldn't just touch the new stuff

Although I feel as if it’s been dispelled to a significant degree, there lingers the misperception that DevOps [1] is mostly for companies that sport ping-pong tables and have free sushi for lunch. Firms that manufacture construction equipment and have large swaths of legacy computer code? Not so much.

It’s not particularly surprising that this misperception exists. A traditional IT organization glances at a company like Netflix and they may see a unicorn wholly unlike themselves. They’re not even entirely wrong. More extreme implementations of approaches such as microservices or near-continuous production releases likely won’t become the norm—especially in the “classic IT” (aka Mode 1) parts of their infrastructure. However, that doesn’t mean DevOps principles can’t also benefit the conservative IT of conservative firms.

It’s about the software

The first reason that DevOps practices apply outside of greenfield, cloud-native (aka Mode 2) IT is that the rules are changing. The “software is eating the world” meme has become something of a cliche but it’s no less true for that. As my colleague James Labocki wrote in a recent post, "Bank of America is not just a bank, they are a transaction processing company. Exxon Mobil, is not only an oil and gas company, they are a GIS company. With each passing day Walgreens business is more reliant on electronic health records.” Furthermore, as James also noted in that post, these shifts in technology and how business is transacted are creating new competitors that come at you from non-obvious directions and places. 

Therefore, while the priorities for classic IT may be different from those of cloud-native, it still needs to change. I’ll go so far as to say that calling this “legacy” is a potentially dangerous turn of phrase as it implies static and in need of wholesale replacement. In fact, to quote James one last time:

In mode-1 they [IT] are looking to increase relevance and reduce complexity. In order to increase relevance they need to deliver environments for developers in minutes instead of days or weeks. In order to reduce complexity they need to implement policy driven automation to reduce the need for manual tasks.

Getting there requires DevOps tools and approaches (together with policy-based hybrid cloud management).

DevOps thinking is proven to work in traditional industries

I thought DevOps was pretty new, you cry! In some ways, DevOps as we usually talk about it today is indeed the child of pervasive open source, continuous integration technologies, platform-as-a-service (PaaS), software-defined infrastructures, and a host of other relatively modern technologies. However, as Gartner points out in “DevOps is the Bimodal Bridge” (paywall):

Mode 1 organizations can use systems thinking for incremental improvements, such as reductions in waste and improved risk mitigation. While DevOps has embraced these methodologies, the concepts have, in fact, decades of real-world application in manufacturing and other industries.

(Here's one version of a presentation I give from time to time about the lessons from manufacturing for DevOps on Slideshare.)

Gartner also maintains that: "there are many elements in DevOps that may, in fact, apply across the modal spectrum. It is our firm belief that by 2020, at least 80% of the practices identified with DevOps and Mode 2 will be adopted by traditional Mode 1 groups for the overall benefit of the organization."

The need to work across IT

One number from a recent IDC InfoBrief sponsored by Red Hat, “DevOps, Open Source and Business Agility: Lessons Learned from Early Adopters” (June 2015), popped out for me even in the context of multi-modal IT.

A majority of organizations (51 percent) don’t plan to have a dedicated DevOps organization. (36 percent do and 13 percent were unsure.) From my perspective, this is mostly a positive result. While dedicated organizations may suggest commitment and focus, they can equally mean stovepiped projects that don’t address the needs of or solve problems for the mainstream IT organization. As a result, their scope may be limited and fail to integrate with core IT systems. 

As Cameron Haight notes in another Gartner research note: "Initial DevOps toolchains are often focused on tactical integration scenarios, thereby restricting the ability to develop more flexible, general-purpose architectures."

Even when it makes sense to initiate DevOps as a pilot project, it’s important to keep attention (of both management and the DevOps folks doing the hands-on work) focused on the end business benefits, which should be the ultimate drivers. In the aforementioned IDC InfoBrief, employee productivity and business revenues were seen as important DevOps business impacts. But the #1 impact? Increase customer satisfaction and engagement. You’re not going to achieve that with a project touching a small portion of your IT. 

[1] Here’s how we define DevOps at Red Hat. 

DevOps is an approach to culture, automation, and platform design for delivering increased business value and responsiveness through rapid, iterative, and high-quality IT service delivery. It applies open source principles and practices with: 

  • Culture of collaboration valuing openness and transparency
  • Automation that accelerates and improves the consistency of application delivery
  • Dynamic software-defined and programmable platform

Friday, November 20, 2015

My fave carry-on luggage

I travel a lot. Sometimes too much. And I get asked by a lot of friends and acquaintances about gear and other preferences. I’ve been meaning to write some of this down for a while. Here’s my start.

Let’s start with my biases. I avoid checking luggage whenever possible, which is mostly any week-long business trip to start with and many other cases as well. I consider roll-aboards to be the instrument of the devil for anyone who is otherwise physically able to carry a shoulder bag or doesn’t have another specific need. They hog overhead space and trip you up on concourses. You should require a handicapped sticker to use one. 

So soft luggage. Carry-on. What are my preferred options?

My go-to for business travel is the Patagonia MLC. (MLC = Maximum Legal Carryon) It’s got a nice shoulder strap as well as some thin backpack straps. Bomber zippers. A couple of outside compartments suitable for typical travel gear like pens, earphones, Kleenex, etc. My friend and former colleague Stephen O’Grady has called it the perfect luggage. 

I don’t go that far. A couple demerits:

The primary thing that I find sub-optimal about the MLC is that it divides the main compartment vertically. I find that this makes it difficult to pack rectangular or square-ish shapes or even bulky shoes. I get the desire to create separate zones in luggage but generally I’d just as soon use stuff-sacks, laundry bags, Eagle Creek cubes, or even a supermarket plastic bag within a larger space to separate dirty clothes and the like.

The zippers are also wrap-around. This makes it somewhat easier to squeeze in tight loads but it also makes it easier for casually closed zippers to shed contents in the middle of an airport. 

I’d also note that the thin backpack straps are intended for carrying modest loads for modest distances. But the MLC isn’t really intended as a “travel backpack.” It’s a reasonable tradeoff given that the backpack straps are not the focus of the luggage.

An alternative that I also use regularly is the Osprey Porter 46, which is much more explicitly in the vein of a travel backpack without silly distractions like wheels or the rigid hunks of material that many products in the category sport. While I wouldn’t want to carry it on my shoulders were it filled with lead weights, the shoulder straps are reasonably padded and it also includes a hip belt. Like the Patagonia bag, the zippers and general quality are all solid.

While not rigid, the Osprey pack does loosely hold its shape. It’s primarily one large compartment although there’s a zipper at the top to a small compartment that basically takes its volume from the main compartment. As noted with the Patagonia, I’m generally good with the flexibility of this approach.

The Osprey is very much a travel backpack. It has a well-made padded handle but there’s no shoulder strap and it’s not really designed to be carried other than as a backpack. I generally take the Osprey when I know I’m going to be schlepping my luggage around a lot on foot, while I take the Patagonia on a more typical business trip.

There’s also an Osprey Porter 65, which has a 65L volume rather than a 46L volume but is otherwise identical to the smaller model. This bag is not airline carryon compliant but is typically fine for trains. Now, I’m certainly not going to encourage people to take oversized bags on planes, but I would note that this is a relatively soft compressible bag so it can generally be put in an overhead if it’s only partially filled. I’ve done this at times when I’ve wanted the extra space at my destination to consolidate my laptop etc. bag in my luggage for walking around cities or traveling on trains or when I’ve wanted some extra space for purchases that I can then check for the trip home. 

Links for 11-20-2015

Survey says: Who owns DevOps strategy?

I’ve previously written about the overall results from IDC’s “DevOps, Open Source and Business Agility: Lessons Learned from Early Adopters” InfoBrief study sponsored by Red Hat. I encourage you to take a look as there’s a lot of interesting data about enabling technologies, Platform-as-a-Service (PaaS), open source, and desired software support models.[1] This post though dives into a specific result that ended up on the cutting room floor when the final InfoBrief was edited.

"Of the following stakeholder groups, which has the primary responsibility for driving your organization's DevOps strategy?"

The plurality but not the majority (38 percent) said that traditional application development teams had the responsibility. Other common answers included traditional IT operations teams (19 percent), dedicated DevOps teams (17 percent), and corporate C-level executive teams (13 percent).

I don’t find those overall numbers particularly surprising. DevOps tends to be thought of as being more about accelerating application development and release cycles than streamlining infrastructure operations.[2] So it’s pretty natural that devs would be seen as driving an initiative that most directly impacts them. (That said, in another survey question, 47 percent said that IT operations staff efficiency/productivity improvement was a primary DevOps goal so there are absolutely both dev and ops benefits.)

I might have expected to see more dedicated DevOps organizations driving strategy, at least in today’s early going. [3] However, our internal experience at Red Hat is that dedicated organizations can end up operating independently of the existing IT organization—making it hard to tie into existing apps and infrastructure. Therefore, I find the fact that early adopters are mostly viewing DevOps as something to be driven as part of mainstream IT rather than as an off-to-the-side project a good thing.

Slice the data based on how app devs answered and how IT ops answered though and things get interesting (if still not wholly unexpected).

It’s apparently quite obvious to your average developer who is or ought to be running the DevOps show. They should (76 percent) with another 10 percent allowing for the possibility of a dedicated organization driving the strategy. A mere 3 percent have IT ops driving things.

How did IT Ops answer? Well, they’re even more certain than devs that their counterparts shouldn’t be running DevOps with only 2 percent saying that traditional application development organizations have the primary responsibility for driving DevOps strategy. Beyond that near-unanimity though, they’re pretty divided. Only 34 percent said the traditional IT operations team should be in charge. Other responses were split between a dedicated team (24 percent), a corporate C-level executive team (21 percent), line of business decision makers (7 percent), or even a service provider like a system integrator (9 percent).

Pretty much anyone except their own developers I guess.

[1] Survey respondents were 220 IT decision makers in the US and UK who were either currently using DevOps in production or evaluating/testing DevOps.

[2] I’d argue that this dev-centric view isn’t the best way to think about DevOps, but it’s common.

[3] Note, however, that this question was specifically about who is driving or will drive strategy. A materially higher number (35 percent) have or plan to have a dedicated DevOps organization. That organization apparently just won’t drive strategy in many cases.

Tuesday, October 20, 2015

How open source is increasingly about ecosystems

Fish ecosystem by nerdqt87

When we talk about the innovation that communities bring to open source software, we often focus on how open source enables contributions and collaboration within communities. More contributors, collaborating with less friction.

However, as new computing architectures and approaches rapidly evolve for cloud computing, for big data, for the Internet-of-Things, it’s also becoming evident that the open source development model is extremely powerful because of the manner in which it allows innovations from multiple sources to be recombined and remixed in powerful ways. Consider the following examples. 

Containers are fundamentally enabled by Linux. All the security hardening, performance tuning, reliability engineering, and certifications that apply to a bare metal or virtualized world still apply in the containerized one. And, in fact, the operating system arguably shoulders an even greater responsibility for tasks such as resource or security isolation than when individual operating system instances provided a degree of inherent isolation. (Take a look at the fabulous Containers coloring book by Dan Walsh and Máirín Duffy for more info on container isolation.)

What’s made containers so interesting in their current incarnation—the basic concept dates back over a decade—is that they bring together work from communities such as Docker that are focused on packaging applications for containers and generally making containers easier to use with complementary innovations in the Linux kernel. It’s Linux security features and resource control such as Control Groups that provide the infrastructure foundation needed to safely take advantage of container application packaging and deployment flexibility. Project Atomic then brings together the tools and patterns of container-based application and service deployment. 
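As a rough conceptual sketch (emphatically not the kernel's actual implementation), control groups form a tree, and a process in a child group is effectively bounded by the tightest limit along its path to the root. The class and names below are invented for illustration:

```python
# Conceptual model of hierarchical cgroup limits: a descendant group can
# never exceed the memory cap of any of its ancestors. This is a toy
# model for illustration, not kernel code.

class CGroup:
    def __init__(self, name, memory_limit=None, parent=None):
        self.name = name
        self.memory_limit = memory_limit  # bytes; None means "no limit set here"
        self.parent = parent

    def effective_memory_limit(self):
        """Walk up the tree and return the tightest limit that applies."""
        limits = []
        node = self
        while node is not None:
            if node.memory_limit is not None:
                limits.append(node.memory_limit)
            node = node.parent
        return min(limits) if limits else None

root = CGroup("/")
containers = CGroup("containers", memory_limit=2 * 1024**3, parent=root)
web = CGroup("web", memory_limit=4 * 1024**3, parent=containers)

# The child asked for 4 GiB, but its parent caps the whole subtree at 2 GiB.
print(web.effective_memory_limit() == 2 * 1024**3)  # → True
```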

We see similar cross-pollination in the management and orchestration of containers across multiple physical hosts; Docker is mostly just concerned with management within a single operating system instance/host. One of the projects you’re starting to hear a lot about in the orchestration space is Kubernetes, which came out of Google’s internal container work. It aims to provide features such as high availability and replication, service discovery, and service aggregation. However, the complete orchestration, resource placement, and policy-based management of a complete containerized environment will inevitably draw from many different communities.
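The replication idea is easy to sketch: a controller continuously compares desired state against observed state and converges the two. The toy loop below captures that spirit; it is invented for illustration and is nothing like Kubernetes' actual code:

```python
import itertools

# Toy reconciliation loop in the spirit of a Kubernetes replication
# controller: compare the desired replica count with what's running and
# start or stop "pods" until they match. All names are invented.

counter = itertools.count(1)

def reconcile(running, desired):
    """Return a set of pod names with exactly `desired` members."""
    pods = set(running)
    while len(pods) < desired:
        pods.add("pod-%d" % next(counter))  # schedule a replacement
    while len(pods) > desired:
        pods.pop()                          # scale down
    return pods

pods = reconcile(set(), desired=3)
print(len(pods))  # → 3

# A pod "dies"; the next reconcile pass replaces it.
pods.discard(next(iter(pods)))
pods = reconcile(pods, desired=3)
print(len(pods))  # → 3
```

The interesting engineering, of course, is in doing this continuously, at scale, and in the face of partial failures.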

For example, a number of projects are working on ways to potentially complement Kubernetes by providing frameworks and ways for applications to interact with a scheduler. One such current project is Apache Mesos, which provides a higher level of abstraction with APIs for resource management and scheduling across cloud environments. Other related projects include Apache Aurora, which Twitter employs as a service scheduler to schedule jobs onto Mesos. At a still higher level, cloud management platforms such as ManageIQ extend management across hybrid cloud environments and provide policy controls to govern workload placement based on business rules as opposed to just technical considerations.
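Mesos-style two-level scheduling can be caricatured in a few lines: the master offers each agent's spare resources to a framework, and the framework (not the master) decides which offers fit its tasks. This is a toy model with invented names and numbers, not the actual Mesos API:

```python
# Toy model of two-level scheduling in the spirit of Apache Mesos: the
# master advertises resource offers, and the framework accepts only
# those big enough for its next task. All values are illustrative.

offers = [
    {"agent": "agent-1", "cpus": 4, "mem_mb": 8192},
    {"agent": "agent-2", "cpus": 1, "mem_mb": 1024},
    {"agent": "agent-3", "cpus": 8, "mem_mb": 16384},
]

task_needs = {"cpus": 2, "mem_mb": 4096}

def accept_offers(offers, needs):
    """Framework-side logic: take any offer that can fit the task."""
    return [
        o["agent"]
        for o in offers
        if o["cpus"] >= needs["cpus"] and o["mem_mb"] >= needs["mem_mb"]
    ]

print(accept_offers(offers, task_needs))  # → ['agent-1', 'agent-3']
```

Splitting the decision this way is what lets multiple frameworks with very different scheduling policies share one pool of machines.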

We see analogous mixing, matching, and remixing in storage and data. “Big Data” platforms increasingly combine a wide range of technologies from Hadoop MapReduce to Apache Spark to distributed storage projects such as Gluster and Ceph. Ceph is also the typical storage back-end for OpenStack—having first been integrated in OpenStack’s Folsom release to provide unified object and block storage. 

In general, OpenStack is a great example of how different, perhaps only somewhat-related open source communities can integrate and combine in powerful ways. I previously mentioned the software-defined storage aspect of OpenStack but OpenStack also embeds software-defined compute and software-defined networking (SDN). Networking’s an interesting case because it brings together a number of different communities including OpenDaylight (a collaborative SDN project under the Linux Foundation), Open vSwitch (which can be used as a node for OpenDaylight), and network function virtualization (NFV) projects that can then sit on top of OpenDaylight—to create software-based firewalls, for example.

It’s evident that, interesting as individual projects may be taken in isolation, what’s really accelerating today’s pace of change in software is the combinations of these many parts building on and amplifying each other. It’s a dynamic that just isn’t possible with proprietary software.

Links for 10-20-2015

Wednesday, October 14, 2015

VMs, Containers, and Microservices with Red Hat's Mark Lamourine

In this podcast, my Red Hat engineering colleague Mark Lamourine and I discuss where VMs fit in a containerized world and whether microservices are really the future of application architecture and design. Will organizations migrate existing applications to new containerized infrastructures and, if so, how might they go about doing so?

Listen to MP3 (0:17:50)

Listen to OGG (0:17:50)

[TRANSCRIPT]

Gordon Haff:  Hi, everyone. This is Gordon Haff in Cloud Product Strategy at Red Hat. I'm here with my much more technical colleague, Mark Lamourine. Today we're going to talk about containers and VMs.

I'm not going to give away the punch line here, but a lot of our discussion is, I think, going to ultimately circle around the idea of "How purist do you want to be in this discussion?"

Just as a level set, Mark, how do you think of containers and VMs? I hesitate to say containers versus VMs. But how do you think about their relationship?

Mark Lamourine:  It's actually characterized pretty nicely in the name. A virtual machine is just that, you're emulating a whole computer. It has all the parts, you get to behave as if it's a real computer. You can treat it in kind of the conventional way with an operating system and set up and configuration management.

A container is something where it's just much more limited. You don't expect to live in a container. It's something that serves your needs, has just enough of what you need for a period of time and then maybe you're done with it. When you're done, you set it aside and get another one.

Gordon:  They're both abstractions, containers and VMs are both abstractions at some level.

Mark:  There are situations now, at least, where you might want to choose one over the other. The most obvious one is a situation where you have a long lived process or a long‑lived server.

In the past, you would buy hardware and you'd set up these servers. More recently, you would set up a VM system, whether it's OpenStack or whatever. You'd put those machines there.

They tend to have a fairly long life. You apply configuration management and you update them periodically, and they probably have uptimes on the order of hundreds of days. If you've been in a really good shop, most places have one with many hundreds of days uptime for services like that, for very stable, unitary, monolithic services.

Those are still out there. Those kinds of services are still out there.

Containers are more suited, at this point, to more transient systems: situations where you have a short-term question, some service you want to set up for a brief period of time and then tear down, because that service is really just going to calculate the answer to some query or pull out some big data, and then you're going to shut it down and replace it.

Or other situations where you have a scaling problem. This is purely speculation, but I can imagine NASA, when they have their next Pluto flyby or whatever, needing to scale out and scale back. In that case, you know that those things are transient, so putting up full VMs for a web server that's just passing data through may not make sense.

On the other hand, the databases on the back end may need either real hardware or a virtual machine, because the data is going to stay. But the web servers may come and go based on the load.

I see containers and VMs still as both having a purpose.

Gordon:  Now, you used "At this point" a couple of times. What are some of the things, at least hypothetically, containers would need to do to get to the point where they could more completely, at least in principle, replace VMs?

Mark:  One of the things I'm uncomfortable with at this point is that people talk about containers being old technology. That's true in a strict sense; we've had container-like things going back as far as IBM mainframes and MVS.

It's just recently, in the last three or four years, become possible to use them everywhere, to use them in ways we've never tried before, and to build them up quickly into aggregations and combinations.

We're still learning how to do that. On a traditional host or VM, you would put several different related services inside one box or one VM, configure them all to work together, and let them share the resources there.

In the container model, the tendency is to decompose those into parts. We're still not really good yet at providing the infrastructure to quickly, reliably, and flexibly set up moderately to very complex containerized services.

There are exceptions, obviously, people like Google and others. There are applications that work now at a large scale. But not, I think, in kind of the generalized way that I envision.

When that becomes possible, when we learn the patterns for how to create complex applications to take a database container off the shelf and apply three parameters and connect it to a bunch of others and have it be HA automatically, then I think there might be a place for that.

The other area is the HA part, where you could create a long‑lived service from transient containers. When you've got HA mechanisms well enough worked out, then when you need to do an update to a single piece, you kill off a little bit and you start up another bit with the more recent version. You gradually do a rolling update of the components and no one ever sees the service go down.

In that case, the service becomes long‑lived. The containers themselves are transient, but no one notices. When that pattern becomes established when we learn how to do that well, it's possible that more and more hardware or VM services will migrate to containers. But we're not there yet.

Gordon:  I'm going to come back to that point in just a moment. I think one other thing that's worth observing, and we certainly see this in terms of some of the patterns around OpenStack is, we very glibly talk about this idea of having cattle workloads. You just kill off one workload, doesn't really matter and so forth.

In fact, there's a fairly strong push to build in, for instance, enterprise virtualization types of functionality into something like OpenStack, for example, so you can do things like "Live migration." Because, in fact, it's easy to talk about cattle versus pets, workloads. But like many things, the real world is more complicated than simple metaphors.

Mark:  Yes, and I think that the difference is still knowledge. We talk about cattle, we talk about having these large independent, I don't care, parts.

Currently, it's still hard to build for a small company, perhaps. It's hard to build something that has the redundancy necessary to make that possible. In a fairly small company, you're either going to go to the cloud or if you're moderate‑sized, you're going to have something in‑house.

The effort of making something a distributed HA style service for your mail system or for whatever your core business is, it's still hard. It's easier to do it as a monolith, and as long as the costs associated with the monolith are lower than the costs associated with starting up a distributed service, an HA service, people are going to keep doing it.

When the patterns become well enough established that the HA part disappears down into the technology, that's when I think more of this kind of, it might really be cattle underneath.

Gordon:  Right. We see a lot of parallels here with things like parallel programming and so forth, is that when these patterns really have become well established, one of the key reasons they have been able to become well established is that the plumbing, so to speak, or the rocket science needed to really do these things is being submerged in underlying technology layers.

Mark:  That's actually what Docker is. That's really what Docker and Rocket both have done. They've taken all of the work of LXC or Solaris containers and they've figured out the patterns. They've made appropriate assumptions and then they've pushed them down below the level where an ordinary developer has to think about them.

Gordon:  Coming back to what we were talking about a few minutes ago, we were talking a little bit about distributed systems and monoliths versus distributed services and so forth.

From a practical standpoint, let's take your typical enterprise today where they're starting to migrate some applications or designing new applications, which may be the more common use case here, and they want to use containers. There's actually some debate over how we start to develop for these distributed systems.

Mark:  You hit on two different things and I want to go back to them. One is the tendency to migrate things to containers, and the other is to develop new services in containers.

It seems to me there's a lot of push for migration, as opposed to people developing new things. People want to jump into the cloud. They want to jump into containers. They're like, "Well, containerize this. Our thing, it needs to be containerized," without really understanding what that means in some cases.

That's the case where, which direction do you go? Do you start by pulling it apart and putting each piece into containers? Or do you stuff the whole thing in and then see what parts you can tease out?

I think it really depends on the commitment of the people and their comfort with either approach. If you've got people who are comfortable taking your application and disassembling it and finding out the interfaces up front and then rebuilding each of those parts, great, go for it.

This actually makes me uncomfortable, because I prefer to decompose things. But if you've got something where you can get it as a monolith or small pieces into one or a small number of containers and it does your job, I can't really argue against that. If it moves you forward and you learn from it, and it gets the job done, go ahead. I'm not going to tell you that one way is right or wrong.

Gordon:  To your earlier point, in many cases, it may not even make sense to move to a container if it's working fine in a VM environment. You don't really get brownie points for containerizing it.

Mark:  I think that it seems like there's a lot of demand for it. Whether or not the demand is justified, again, is an open question.

Gordon:  A lot of people use different terms, but to use Gartner terminology, it's mode 1 IT and mode 2 IT. Really, the idea with mode 1 IT is, you very much want to modernize where appropriate, for example, replacing legacy Unix with Linux and bringing more DevOps principles into your software development and so forth. But you don't need to wholesale replace it or wholesale migrate it.

Whereas your new applications are going to be developed more on mode 2 infrastructure with mode 2 techniques.
We've kind of been talking about how you migrate or move, assuming that you do. How about for new applications? There actually even seems to be some controversy among various folks in the DevOps "movement" or in microservices and so forth over what's the best way to approach developing for new IT infrastructures.

Mark:  Microservices is an interesting term because it has a series of implications about modularity and tight boundaries and connections between the infrastructure that...

To me, microservices almost seems like an artificial term. It's something that represents strictly decomposed, very, very short‑term components. I find that to be an artificial distinction. Maybe it's a scale issue, in that I see services as a set of cooperating, communicating parts.

Microservices is just an extreme of that, where you take the tiniest parts, set a boundary there, and then you build up something with kind of a swarm of these things.

Again, I think that we're still learning how this stuff works. People are still exploring microservices, and they'll look back and say, "Oh yeah, we've done stuff like this with," I think, SOA applications and SOAP and things like that.

But if you really look at it, there are comparisons, but there are also significant differences. I think the differences are sometimes overlooked.

Gordon:  One of the examples that I like to bring up is that there's a lot of attention paid to Netflix, for example, which famously has this super-microservices type of architecture.

But the reality is, there are other web companies out there, like Etsy, for example, who are also very well known for being very DevOpsy. They speak at a lot of conferences and the like. They basically have this big monolithic PHP application. Having a strict microservices architecture isn't necessary to do all this other stuff.

Mark:  It shifts your knowledge and your priorities. The Netflix model lends itself well to these little transient services. When a customer asks for something, I haven't watched their talks, but I'm assuming that triggers a cascade of lots of little apps that start up and serve them what they asked for. When they're done, those little services get torn down and they're ready for the next one.

There are other businesses where that isn't necessarily the right model. Certainly, as your examples show, you can do it either way. I guess each business needs to decide for themselves where the tipping points are for migration from one to the other.

Gordon:  Yeah. I think if I had to summarize our talk here, and maybe it's a good way to close things out, there are a lot of interesting new approaches here which, certainly, at least some unicorns are using very effectively. But it's still sort of an open question how this plays out across the broader mainstream, the majority and late majority, the slower adopters, a wider swath of organizations.

Mark:  I think what we have now is a set of bespoke, hand‑crafted systems. They're doing things at scale. At Netflix, they're doing things at a large scale. They had to develop a lot of that for themselves.
Now, it means that a lot of what used to be human-intensive operations are done automatically. That doesn't necessarily generalize.

That's where I think there's still a lot of work to be done: to look at the Netflixes, to look at the other companies that are strongly adopting microservices, both for inside and outside services. Because you could say the same thing for inside a company.
I think over the next four or five years, we'll see those patterns emerge. We'll see the generalization happen. We'll see the cases where people identify, "This is an appropriate way and we've documented it, and someone's bottled it so that you can download it and run it and it will work."

But I think we're still a few years out from generic, containerized services, new ones, at the push of a button. It still requires a lot of custom work to make them happen.

Thursday, October 08, 2015

Containers: Don't Skeu Them Up (LinuxCon Europe 2015)

Skeuomorphism usually means retaining existing design cues in something new that doesn't actually need them. But the basic idea is far broader. For example, containers aren't legacy virtualization with a new spin. They're part and parcel of a new platform for cloud apps including containerized operating systems like Project Atomic, container packaging systems like Docker, container orchestration like Kubernetes and Mesos, DevOps continuous integration and deployment practices, microservices architectures, "cattle" workloads, software-defined everything, management across hybrid infrastructures, and pervasive open source.

This session discusses how containers can be most effectively deployed together with these new technologies and approaches -- including the resource management of large clusters with diverse workloads -- rather than mimicking legacy server virtualization workflows and architectures.

This version of the presentation is significantly reworked from earlier. It excises much of the container background while adding discussion on microservices alternatives and services orchestration.

Links for 10-08-2015

Tuesday, September 29, 2015

Podcast: Making DevOps succeed with Red Hat's Jen Krieger


Red Hat Agile Coach Jen Krieger has extensive experience working with development teams on a wide range of projects. Currently, she’s focused on Project Atomic. This experience has given her great insights into what makes teams and projects work and what roadblocks get in the way. In this podcast, Jen shares those insights as well as specific steps that you can take to make your DevOps initiatives successful.

Listen to MP3 (17:21)

Listen to OGG (17:21)

[Transcript]

Gordon Haff:  Hi, everyone. This is Gordon Haff at Red Hat with another edition of the "Cloudy Chat" podcast. We're going to go back to DevOps today. I'm speaking with Jen Krieger, who is an agile coach at Red Hat.

Our topic for today is, "What are the inhibitors to getting DevOps done right at your organization, at multiple levels within the organization?" Jen, welcome. What is the first thing that comes to mind that gets in the way of successful DevOps?

Jen Krieger:  If you go online and type "Cultural blockers to DevOps" into Google and look at that search, you'll probably see that the majority of the answers are going to be somewhere in the realm of two words, the first one being "trust," and the second one being "empathy."

What I rarely see, or maybe I do, but perhaps not necessarily directed at an individual or an executive, is what people can do to change those things. As in, "How can I be more trusting of my co‑workers," or "How can I have more empathy?"

It's easy to just tell somebody to be more empathetic, but maybe not so easy in individual situations, or maybe it's hard to remember those things.

One of the things I'd like to tell people in general, especially the individual contributors, is that when we're talking about trust and empathy, a lot of times it's hard to get to that place. As in, you have a collaborator sitting next to you, and they're doing something that is making you unhappy, or they've done something that has blown up your work week because they did something wrong.

It's hard for you to have that empathy, especially if there is a sense of competition in the workplace. I've been at many companies now. At a large portion of them, there always seems to be a limited number of jobs, a limited number of promotions, a limited number of salary increases. I've heard, "Well, we only have X percent for salary increases or bonuses this quarter, and you got more than so-and-so."

The conversation always seems to breed competition, making it less likely that I want to cooperate with somebody, because I want that additional money. What I think employers don't understand is when they are putting their employees in that situation.

They're allowing a situation to occur where the co‑worker or the individual is not only competing with the people around them, but also starting to compare the company that they're working for to other companies, even competitors.

You're not only breeding competition internally, but you're also breeding competition with the people outside of your workplace.

This is not an easy problem to solve. No one has unlimited money to do whatever they want with it. No one has an unlimited number of promotions.

There aren't always spaces for everybody in management, or wherever they want to take their career. But there are some things that we can do as an organization to alleviate some of those situations, to help everyone.

At the executive level, what I like to tell people is, "We need to start looking at the tech industry." There is an alarming trend sometimes, where we hire and promote technical people into management positions, because they're really great technologists.

Maybe they're not so great at being the kind of person who is interested in mentoring an employee, or interested in employee development.

A lot of companies will do this thing where they have an almost one‑size‑fits‑all employee development plan. If you are somebody who likes a lot of hands‑on interaction, that's probably a good situation for you, because you'll have those touch points with your manager, and they're pretty structured.

If you're somebody who is more introverted or not interested in having that conversation, then it can be overwhelming to have that constant touch point with your manager. Having a manager who doesn't really understand that employees are individuals and need individualized development plans or individualized attention is going to be hard.

Especially if that person is more interested in the technology side of things, versus the human interaction and development side of their job.

I also like to tell them to encourage spot awards, or some sort of system where individuals can award individuals, or individuals can award executives or vice versa, where you just have a pool of money or a pool of points or some way to say, "Hey, thank you so much for what you did," and to use that as a way to recognize people.

You can be an individual contributor having a verbal conversation saying, "Thank you for what you did," but that's something that doesn't get tracked, something that your boss doesn't know about, or something that, when the next promotion comes up, no one knows about.

Having those automated spot awards, or some sort of tracked system for that, can not only help teams feel a little more cooperative with one another, as in, "Hey, we're all working together to get something done."

It can also help management find the introverts in their organization, who may not be speaking up, may not be saying, "Hey, look at what I did over here," and may always just be sitting there quietly doing a fantastic job. Therefore, no one notices what they're doing for the organization. I would also encourage HR to be actively included in those conversations, because these are conversations about rewards.

Gordon:  That's one specific, concrete thing that you've just been discussing there, around peer rewards and peer recognition. One of the things we often hear about DevOps is that it's important to have top‑down support for a DevOps initiative. Imagine you're talking to somebody at company Acme, at the executive level. They really want to drive this DevOps thing within their organization.

A couple of questions. First of all, what might circumstances be at Acme such that you'd just tell them they're not ready to start today? Conversely, what might be some specific things that you'd tell them they really need to put in place so that they can have a successful DevOps initiative?

Jen:  It's hard, because you have to ask that executive to be introspective about what it is they're doing and who they've actually employed. I would say, if you have an organization that is actively interested in withholding information from employees. I'm not talking like, "We're going to acquire this brand‑new company. We can't tell anybody right now."

It's more like, "We have this important project, but we'll just keep that information to ourselves right now, and then surprise everybody later." If you're withholding information or not actively sharing what's going on, I would say it's going to be really hard for you to have success.

If you are an executive who speaks and acts inconsistently, so you say one thing to one person and then something different to the next, it's going to be hard to maintain a culture of trust, and equally hard to maintain a culture of empathy, especially if you are pitting two people against one another for resources, money, hardware, anything that you could see would be needed to get their jobs done.

If you're not necessarily open to listening to ideas from people who report to you, or maybe you're close‑minded to change, as in somebody says, "Hey, I've got this idea," or "I think we might be doing this wrong," and you're not willing to even have the conversation, or you're threatened by that conversation, I would say probably this is not the right step for you.

Gordon:  What sorts of changes at the individual level really need to be made, compared to more standard types of practices?

Jen:  Especially with the concepts of trust and empathy, a lot of times we as humans struggle with the word "jealousy." We are so used to comparing ourselves to the other people around us. Like, "Someone's got a new car. I guess I should get one too," or "My friend got a new phone. I need to get a new phone too."

That's probably a really good place for an individual contributor to start: thinking about jealousy, thinking about "Why did so-and-so get that job over me," or "Why did they get recognition and I didn't." It's definitely been a topic that I struggled with when I was younger.

Certainly in my career, I was always wondering why somebody got a promotion over me, or why somebody got something that I, in the moment, thought I wanted. What really helped me get over that whole concept was to set attainable goals for myself. The keyword there is "attainable." For me, it's not "I want to be CEO of company X, and I want to make half a million dollars a year."

It's "I want to go and pass a certification exam so that eventually I can get this promotion," or setting a clear educational goal or personal improvement goal. One of my most recent goals is to work no more than a 45‑to‑50‑hour work week. What I do for that is keep an eye on how long I'm working during the week, so that I'm not exceeding that.

That's mostly so that I can also have personal goals that I'm trying to meet, or keep my work out of my personal life. What that can help an individual do is this: say I see a co‑worker at work who is in a very similar role get a promotion, and I say, "Gosh, darn it. I wish I had gotten that."

What I can do is look at my already written-down goals and say, "Wait a minute. Here are the three reasons why that really wasn't what I wanted, because that promotion may come with X number of additional hours and more responsibility. Yeah, maybe it comes with more money, but that wasn't part of my goal set."

Maybe I didn't have a goal to make more money in the moment. Having those goals written down in advance, at least for me, helps me to know whether or not I really should be jealous. I shouldn't be like, "Oh gosh. I wish I got that job," except in the situation in which it really was something that I wanted.

Jealousy, disappointment, all that kind of stuff comes with that. Then, I have to be really serious and honest with myself and just ask the question, "What should I have done differently so that I could have actually been the person who got what that other person got?" Honestly, these are not easy questions or things to work through.

It can be hard. Even for me, it's very hard to set those goals. I recommend that you find a friend or a mentor, or somebody in your profession, maybe not necessarily somebody working at the same company, because it might be harder to talk about things bluntly and in real terms if you're working at the same company.

Find somebody that you can talk to who can help you stick to the goals that you've written down, so you're not being loose about it.

Gordon:  You've worked with teams, some of which have certainly been successful and, I assume, at various times in the past, some maybe not so successful. What are the characteristics that jump to mind as contrasting those two situations?

Jen:  I've worked on things that have been complete disasters. I worked on a project that prevented any enhancements going to production for almost a year, which to me is a terrible situation to be in, when you're working with a company that needs to get enhancements into the product they're working on and they can't because of the technical situation they're in.

Even with that team, they were up against just a tremendous problem, which is the reason why it took them so long. It was not an insignificant project to take on. There are two things that could have made them more successful. Technology today makes it a lot more attainable, which it perhaps was not back then. The first thing, obviously, for me is always teamwork.

I laugh, because every time I use that phrase, I always think of "The IT Crowd," and there is a section that I absolutely love. I recommend everybody go and watch it. Just type "IT Crowd team, team, team" into YouTube and you will pull up that video. It's interesting to me, because I'm kind of snarky about the whole concept of teamwork.

I've also seen teams that can work together so well that even when they're faced with a project that is absolutely going to fail, they still seem to make progress, or recover from that failure much faster than a team that had a failure that was catastrophic to their work, their working environment, their work relationships, and everything that goes into it.

What I'd want to see is a situation where, even in a failing team, it doesn't take them weeks and months to get back to the state in which they were working before the failure. They can just say, "OK. We failed, no big deal. Tomorrow is a new day. We're going to see how we can prevent that from happening in the future."

That teamwork, that trust, the empathy, the lack of finger-pointing, is a critical component of success on projects. I had a second one, but it slips my mind right now. Perhaps we'll come back to it if it comes back to me.

Gordon:  In the interest of the Internet's love of top five lists and top 10 lists and whatnot, briefly, what are five things you should put in place today, tomorrow, next week, if you're thinking about starting a DevOps project, to make sure you're successful?

Jen:  For me, the number one thing would be to look into blameless retrospectives or pre‑mortems. What are you going to do if something goes wrong? Have a plan before it goes wrong. Have a conversation before a release. Say, "What are all the things that could be gotchas?" and come up with a plan to address them.

A lot of times, project managers call those "risk analyses." It can be off‑putting when you call it a risk analysis, but do them. It doesn't have to be to the level that some project managers will do it. Have the conversation. It's probably a good idea to have that culture in place before something goes wrong.

If you have a management structure in place that is firm, and there's not a whole lot of employee development going on, look into training to see if there's something that you can do in your organization to help guide managers in a direction that will encourage them to look at their employees as individuals, and try to change the way that you are mentoring people internally.

Individuals should have a list of goals. Sometimes, those goals are going to guide you out of the company you're working at. That can be quite scary, but have your personal goals. Make sure that you know individually what it is that you are going to be comfortable doing. Specifically, for me, sometimes you're going into a situation where you're saying, "I'm going to do DevOps. My organization's going to be great."

Sometimes, that comes with the reality that you're going to have to do the right thing. The right thing is not always going to be popular. Make sure you understand where your boundaries are before you get to that point. Is your boundary "I'm going to do the right thing until I get fired because of it," or "I'm going to do the right thing until this point in time happens, and then I'll let it go"?

Just know what your boundaries are as an individual. The final one is, make sure that you have a system in place to do some sort of awarding, some sort of recognition that is not manager-down recognition. Because manager-down recognition won't work if you're already having a problem with managers who are not interested in employee development. Or it's not that they're not interested, but they don't know how.

It's going to be really hard to encourage a system of collaboration or teamwork if there's no way for a team member to say thank you to another team member other than just saying thank you. Thank-yous are great, but have some way to track them. Even monetary awards are great. Something is better than words. Those are my top five, right now.

Gordon:  Great. Thanks, Jen. Anything else you'd like to add in closing?

Jen:  Nope. That's it.

Carmen DeArdo of Nationwide talking DevOps

There’s a nice interview with Carmen DeArdo, DevOps technology leader for Nationwide Insurance, over at The Enterprisers Project. The whole thing is worth reading but here’s one highlight:

So a key learning is that while technology is great, if you don’t have an efficient end-to-end process that provides continuous visibility of the work as it progresses and also reduces variance, technology will only provide limited value to improving speed and efficiency of what you deliver to your customers.

This is an ongoing theme around DevOps. Modern open source toolchains do matter. But so does having processes that enable automation and provide continuous rapid feedback loops. (And, as Carmen mentions elsewhere in the interview, having the right people and culture is key as well. He says that "forming agile teams is a great way to start.")

DevOps on the Red Hat Developer Blog

Monday, September 28, 2015

Links for 09-28-2015

Wednesday, August 05, 2015

Bluetooth beacons and the (potential) creep factor


I’ve started fooling around with Bluetooth beacons recently. Here’s a story about how Target is planning to pilot beacons in a retail setting (which is one of the most common use cases):

 During Target’s testing period, capabilities are limited to surfacing deals and recommendations based on what section of the store a customer is in: A two-for-one deal on Tylenol pops up when a shopper hits the pharmacy, or a recipe for banana bread appears while walking through the fresh fruit section. Target has plans to add features like reorganizing a shopping list based on the most efficient route through the store, and pushing a reminder if you forgot anything on that list once you hit the checkout line, but these will not be available at launch.

Such applications seem potentially useful: coupons/specials based on the department, "I need some help now," etc. They can also start to inch over the line into creepy even if, as is the case here, the consumer needs to install and enable a smartphone app to use the beacon services (and be tracked in the store). Imagine having your path through the store loaded into a database, correlated with purchases, your demographic information, and other information from third-party databases to deliver a "personalized experience.”
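As an aside, the "what section of the store you're in" part is typically just a coarse distance estimate computed from the beacon's received signal strength (RSSI). Here's a rough sketch, assuming the standard log-distance path-loss model; the calibration constants are illustrative, not any vendor's actual values:

```python
def estimate_distance(rssi, measured_power=-59, path_loss_exp=2.0):
    """Rough distance (meters) to a beacon from an RSSI reading (dBm).

    measured_power: expected RSSI at 1 meter, a per-beacon calibration
    value. path_loss_exp: roughly 2 in free space, higher in a
    cluttered store. Both values here are illustrative.
    """
    return 10 ** ((measured_power - rssi) / (10 * path_loss_exp))
```

In practice, apps bucket the result into coarse zones like "immediate," "near," and "far" rather than trusting the raw number, since indoor RSSI is quite noisy.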

At one level I don’t worry too much, given that we’re generally pretty bad at targeting advertising anyway. From the comments on this Technology Review article about Facebook a few years back:

A well-known dirty little secret in the advertising world is that, even after millennia of advertising efforts, not a single copywriter can tell you with any confidence beyond a coin flip whether any given advertisement is going to succeed.  The entire "industry" is based on wild-assed guesses and the media equivalent of tossing noodles against the kitchen wall to see what might stick, if anything.  It doesn't matter whether it's print, TV, or on-line media, no one can predict what will actually work.  FB engineers are probably even less well-equipped intellectually than the average ad hack in being able to come up with a better mousetrap to get people to buy what sellers want to hawk.

That said, companies are often somewhere between mildly and completely oblivious about how consumers perceive targeted advertising and other forms of personalization. As Lisa Barnard of Ithaca College noted in a recent study:

Previously, targeted ads were based on larger demographic descriptions, such as age or hobbies. But now, with personal information scattered across databases, marketers are able to create more specific consumer profiles. That's what consumers find creepy and what marketers are failing to consider, Barnard said. Even millennials, most of whom are digital natives, are bothered by this extreme customization, she added.

And take a gander at this clip from Minority Report. It may approach satire but I’ve seen perfectly serious “thought leadership” videos from tech companies that weren’t much different.

(For a different and wholly non-creepy example of Bluetooth beacon technology that uses beacons as the mobile element of an application—they’re more typically used as stationary devices tied to a particular location—take a look at this writeup of a demo Burr Sutter put together for Red Hat Summit.) 

Links for 08-05-2015

Thursday, July 30, 2015

Links for 07-30-2015

Podcast: Soft skills for DevOps with Red Hat's Jen Krieger

Jen Krieger is an Agile Coach on Project Atomic at Red Hat. In this podcast, she discusses soft skills, such as communication, that help software teams work better together and reduce unnecessary conflict. She also talks about how software development, especially in open source environments, is increasingly physically distributed and shares some tips for making remote teams work more effectively together.

Show notes:
MP3 audio (17:06)
OGG audio (17:06)

Monday, July 20, 2015

Links for 07-20-2015

Friday, July 17, 2015

Links for 07-17-2015