Wednesday, July 23, 2014

Iris on Great Wass Island in Maine

Great Wass Island, Maine

I took some photos with my new 12-24mm Sigma lens on my Canon 5DIII before the fever I picked up after my Asia trip brought me down.

Links for 07-22-2014

Wednesday, June 18, 2014

Links for 06-19-2014

Friday, June 13, 2014

PaaS: Lessons from Manufacturing

I wrote this presentation for Cloud Expo 2014 in NYC on June 11. I plan to make a narrated version available one of these days but I'm taking off on some travel and I promised I'd make the slides themselves available after the conference.

Here's the abstract:

Software development, like engineering, is a craft that requires the application of creative approaches to solve problems given a wide range of constraints. However, while engineering design may be craftwork, the production of most designed objects relies on a standardized and automated manufacturing process. By contrast, much of what's typically involved when moving an application from prototype to production and, indeed, maintaining the application through its lifecycle remains craftwork. In this session, Red Hat Cloud Product Strategist Gordon Haff discusses how a Platform-as-a-Service (PaaS) like Red Hat OpenShift can bring industrialization to the development and deployment of applications. By abstracting irrelevant details and automating key activities, a PaaS can do for software development productivity and quality what assembly line innovations did for manufacturing.

Thursday, June 12, 2014

Podcast: Internet of Things with Red Hat's James Kirkland


James Kirkland, Chief Architect for Intelligent Systems at Red Hat, discusses the material economic impact that the "industrial Internet of Things" can bring to many businesses. In particular, James discusses a number of the specific scenarios in which transportation companies, such as railroads, are looking to IoT in order to dramatically improve the efficiency of their operations. James caps off this discussion with a look at IoT security.

Links:
Red Hat Embedded Program

Listen to MP3 (0:17:37)
Listen to OGG (0:17:37)

[Transcript]

Gordon Haff:  Hi, everyone. This is Gordon Haff here at Cloud Expo 2014. I'm here with James Kirkland, who's the Chief Architect for Intelligent Systems and the Internet of Things [at Red Hat], which is even more buzzwords than I have in my title. Welcome, James.
James Kirkland:  Thank you very much. Thanks for having me on.
Gordon:  James, let's talk about the business angle of the Internet of Things. There's been a lot of attention paid to consumer‑type stuff, like being able to put up my window shades without having to get up from my sofa. But that's probably not where the money is with the Internet of Things. How are we going to make money off of this thing?
James:  We classify it as the industrial Internet of Things or the enterprise Internet of Things. The way they're going to be able to drive lower costs and higher asset utilization is through what I call "the data cycle."
They're going to gather data out from these sensors on the edge and in the Internet of Things. That data is going to be shuttled along into the back office and into the cloud. You're going to use data analytics applications to mine that for ways to optimize your system.
For example, the freight railroads in the United States, if they're able to increase their average fleet speed by one mile an hour, it's $256 million in profit a year.
They're always looking at ways to optimize the fleet speed, using predictive analytics to control for track failures, equipment failures and things like that. They would rather pull equipment out of service early than have it fail on the line and stop traffic. It's things like that, the predictive analytics side.
The other side is looking for patterns where you can optimize flows and look for data where you see customer patterns or patterns within things. For example, with smart grid one of the issues is...I'll use an example of electric vehicles and charging those electric vehicles.
With electric cars, there are two challenges. One is that they have to detect when customers are adding electric cars in particular neighborhoods and areas, so that they can provision additional transmission capacity to handle that.
More importantly, they need to develop algorithms that are based on data so that, when they detect that they're in peak load in certain areas, they can then decide which charging stations they can shut off, which air conditioning units they can raise the temperature on, things like that.
It's the big data analytics on the backside, in the cloud, where they're going to be able to look at that aggregate data, including where they did fail, analyze it, and find patterns so that the next time it happens, they can handle it in a more efficient manner, without having to provision additional generation capacity.
A lot of what they do is have emergency generation capabilities that are higher cost, like oil, that they turn on in these peak periods. If they can avoid that by shutting off things like the charging stations temporarily, it makes a big difference in their profitability.
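The peak-load decision James describes can be sketched as a simple prioritized load-shedding loop. This is purely illustrative Python on my part; the load names, priorities, and numbers are all hypothetical, not taken from any real smart-grid system.

```python
# Illustrative sketch of the peak-load response described above: when a
# neighborhood exceeds its capacity, shed the lowest-priority controllable
# loads (EV chargers first, then air-conditioning setbacks) until demand fits.
# All names, numbers, and priorities are hypothetical.

def shed_loads(demand_kw, capacity_kw, controllable_loads):
    """Return the loads to curtail, cheapest disruption first.

    controllable_loads: list of (load_id, kw, priority) where lower
    priority numbers are shed first (e.g. 1 = EV charger, 2 = HVAC setback).
    """
    to_shed = []
    excess = demand_kw - capacity_kw
    if excess <= 0:
        return to_shed  # no peak event; nothing to curtail
    # Shed low-priority loads first until the excess is covered.
    for load_id, kw, _prio in sorted(controllable_loads, key=lambda l: l[2]):
        if excess <= 0:
            break
        to_shed.append(load_id)
        excess -= kw
    return to_shed

loads = [
    ("charger-17", 7.2, 1),   # EV charging station
    ("charger-22", 7.2, 1),
    ("hvac-5", 3.5, 2),       # raise thermostat setpoint instead
]
print(shed_loads(112.0, 100.0, loads))  # → ['charger-17', 'charger-22']
```

The real version of this would, as James says, be driven by rules learned from historical peak events rather than a fixed priority list.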
Gordon:  It seems the common thread in what you're talking about here is that money is involved. Again, going back to my comment at the beginning, I think a lot of the popular press excitement is about things that are very, very far off and very theoretical, or cool and neat and glitzy.
But it's not clear that they necessarily relate to saving money or having a direct business impact. Not to put words in your mouth, but where the Internet of Things is really going to be exciting is when there's a direct, near‑term, achievable way of making more money.
James:  That's it exactly. You've got to find the use cases where there's going to be return on the investment. If there's not a return on the investment, having a refrigerator that glows green when you've got enough food or texts you when you're out of milk...Those are interesting applications, but I think that they're a flash in the pan.
The long‑term return on these things is improving profitability by making the best possible use of the resources that you have. Also, there's a societal aspect which, especially when you look at transportation and smart grid, is environmental. Getting the most bang for your carbon buck, so to speak, and reducing emissions as much as possible.
There's a ton of different reasons why you would do it, but it comes back basically to profitability. That's completely it.
Gordon:  For one thing, when you talk about putting in new sensors, whether it's at the consumer level or at the business level, you're essentially talking about making investments and getting rid of things that are there today and presumably working at some level.
Again, am I going to buy a new refrigerator because it glows red when I'm out of milk? Probably not, but if you're a railroad and investing in some new sensors pays for itself in a year, that makes a lot more sense.
James:  Right. From the railroad examples, it's really important for them to keep everything moving at speed and also to limit misdirections.
For example, it's really important for them to do that as a train comes into a switching yard. They have to determine through analytics what cars are part of that train, how to break the train apart, and then reassemble new trains from it.
There's some percentage of misdirection that happens. A car will get misdirected to the wrong city and has to be sent back, so there's customer dissatisfaction. There's the cost, there's the wear and tear.
We're doing things like working with the railroads to improve their car detection and routing programs. That's based on a combination of new sensors being deployed in the field like you're talking about, replacing systems that have been there 20 or 30 years, and then using the analytics on the backside to develop new rules based on analysis of where they went wrong and where the failures happened.
Gordon:  Let's talk a little bit more about those data analytics and about the learning algorithms and the like. Are these repurposing of the types of algorithms, the types of analytics software that we have today? Or is this a new area? Is this somewhere where research needs to be done, where product development needs to be done?
James:  I definitely think there are some algorithms that are out there today. A lot of the ones that exist today came out of research 10, 20 or 30 years ago in academia. There's growing interest today in finding new sets of rules that are relevant to these systems within the Internet of Things.
I think there's going to be a burgeoning, whether it's academic, whether it's in the private research labs or whatever. We need people researching how we find these optimizations, how we detect these patterns.
It gets back to ‑‑ you and I talked about this before ‑‑ machine learning is still in its infancy. We can find the easy patterns and we can program for them, but we really need something that is smart and diligent, that is going to look for patterns in this data on its own, find them, and then bring them to your attention.
That's an area that the academics need to mature over the next five years. Then it comes out of academia and gets productized like any of these things.
Gordon:  Do you see there being specific breakouts, or is this going to be the typical type of thing, whether there's a lot of blocking and tackling and incrementally knocking off particular use cases?
James:  I think it's going to be a lot of blocking and tackling. One of the companies that I work with does acoustic detection along fiber optic lines. There's essentially a dark fiber optic cable that runs along a railroad track or along a pipeline.
It has devices every few hundred yards that are listening on that for acoustic signatures of failures or of equipment breaking. The sound of a rail with microfractures in it is different than the sound of a rail that is sound.
They are going out into areas where there are rails. They're setting these up, recording, looking for failures, and capturing the signatures of those failures. But it's difficult because you have to take into account that sound propagates differently depending on the humidity in the air and the type of geography.
In this case, it's a private company working in conjunction with a railroad industry body that's like an institute and going out to all these different geographies in places with different weather patterns and different humidities and learning what failure sounds like. Running railroad cars with a flat wheel or with a frozen bearing over these and finding the signature for them.
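In very simplified form, the signature-matching step James describes amounts to comparing a recorded spectrum against a library of known failure spectra. This is my own illustrative Python, not anything from the actual system; real deployments would also correct for the humidity and geography effects he mentions, and the spectra here are made-up numbers.

```python
# Illustrative sketch of matching a recorded acoustic signature against a
# library of known failure signatures (flat wheel, frozen bearing), using
# cosine similarity on a handful of spectral bins. All data is hypothetical.
import math

def similarity(a, b):
    """Cosine similarity between two spectra of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Signatures learned from deliberately running faulty cars over instrumented
# track, as described in the interview (values here are invented).
LIBRARY = {
    "flat-wheel": [0.1, 0.8, 0.3, 0.1],
    "frozen-bearing": [0.7, 0.2, 0.1, 0.6],
}

def classify(recording, threshold=0.95):
    """Return the best-matching failure type, or None if nothing is close."""
    best = max(LIBRARY, key=lambda k: similarity(recording, LIBRARY[k]))
    return best if similarity(recording, LIBRARY[best]) >= threshold else None

print(classify([0.12, 0.79, 0.31, 0.08]))  # → flat-wheel
```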
Gordon:  Internet of Things, I'm not sure that's something most people would really associate Red Hat with. Yet here we are at the Internet of Things Expo, Big Data Expo, Cloud Expo, all the buzzwords expo. What are some of the specific things that Red Hat is doing that touch on Internet of Things?
James:  We've got several areas that we bring something special to the Internet of Things. Obviously, we have Linux. Everybody knows that we have Red Hat Enterprise Linux. Red Hat Enterprise Linux fits within a subset of these devices that are out in the field.
It obviously fits within the cloud and the data center, but also on controllers, gateways and more complicated sensors, where Red Hat Enterprise Linux makes a lot of sense.
We have a suite of middleware products that are lightweight and allow us to deploy into the field and into the cloud a consistent stack of products. They allow you to do things like use messaging‑oriented middleware to move data around. There's also our Fuse ESB, which allows you to transform and translate data when you need to, whether from legacy formats or to interfaces like REST.
We've got a business rules management platform. Our BRMS platform allows you to tag data that comes in and when it matches particular rules, go ahead and take action, whether that action's a notification or triggering a controller that turns on or off a switch.
We also have JBoss Data Grid. If you're having to do real‑time analysis, you can store that real‑time data in the in‑memory data grid, and then have the business rules monitoring that in‑memory data cache so that you get high‑performance, real‑time analysis of that data.
When you look, you have those same, consistent tools in the back office and in the cloud also paired with things like Data Virtualization to abstract the complexities of the data from your developers. Obviously, OpenShift and OpenStack for cloud management and PaaS. We have a broad range of products that lend themselves very well to tackling the types of problems that you run into.
Typically, these are the building blocks that either individual implementers or our partners are going to use to build their own products to do these sorts of things.
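To make the pattern James describes concrete (readings landing in an in-memory store with business rules watching the cache), here's a miniature stand-in. This is plain Python of my own, not the JBoss BRMS or Data Grid APIs, and the sensor names and thresholds are hypothetical.

```python
# Minimal stand-in for the pattern described above: sensor readings land in
# an in-memory cache, and business rules evaluate each reading and trigger
# actions when it matches. Plain Python, not JBoss APIs; names and
# thresholds are hypothetical.

cache = {}      # in-memory store of the latest reading per sensor
actions = []    # actions fired by rules (notifications, control signals)

def rule_overtemp(sensor_id, value):
    # Rule: a bearing temperature above 90 C triggers a maintenance alert.
    if sensor_id.startswith("bearing-temp") and value > 90.0:
        actions.append(("alert-maintenance", sensor_id, value))

RULES = [rule_overtemp]

def ingest(sensor_id, value):
    """Store the reading, then evaluate every rule against it."""
    cache[sensor_id] = value
    for rule in RULES:
        rule(sensor_id, value)

ingest("bearing-temp-axle3", 72.0)   # normal; no action fires
ingest("bearing-temp-axle3", 94.5)   # over threshold; fires the alert
print(actions)  # → [('alert-maintenance', 'bearing-temp-axle3', 94.5)]
```

In the stack James outlines, the cache would be JBoss Data Grid and the rules would live in the BRMS platform rather than as Python functions.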
Gordon:  One of the things that strikes me as you were going through that is, again, I think people tend to think of the sensors, the thermostat or whatever, but, what you're really describing is very much a massive, distributed system.
James:  That's it exactly. We look at the industrial Internet of Things and the enterprise Internet of Things as having three tiers. You've got the sensor edge device, which is going to be a temperature sensor or a vibration sensor or whatever. You may also have a gateway; that gateway potentially brings legacy sensors onto the Internet.
But, there's a central tier that resides in the field, in a yard, in an airport or in a substation that gathers that data, amalgamates it, does quick analysis for quick, tactical business rules management and complex event processing, and then sends summarized data back into the cloud for long‑term, deep, strategic analysis to find new rules.
That's the complex architecture. There's complexity at each of those levels, and it takes broad system management, programming and capabilities to meet these use cases. It happens not only at the sensor edge, but at that control tier and in the cloud. It takes all three to work.
Gordon:  Just one last topic area before we close. Security.
This is obviously a pretty hot topic in the Internet of Things. What are your views on architectures and approaches? There can be some pretty serious consequences if train signals are hacked, for example.
James:  There's several aspects that are important. One is that the heritage of embedded systems is that you build an embedded system and you buy enough for 15 years. You deploy them and you don't touch them again until they die.
That's going to be a thing of the past. All these are going to be network‑connected. In some form or fashion, the level of complexity is going to depend...
But, you're going to need to manage these systems for configuration management. You're going to need to have some form of AAA [authentication, authorization and accounting] on it. You're going to need to have patching. You're going to need to have all those sorts of things.
You're going to manage it sort of like you do a cloud. You're not going to do deep, heavy system management like you would on a database system, but you have to have some control over it. I expect to see tools for that sort of management evolve over the next couple of years.
The other side of it is that you need to look at encryption and certificate‑based authentication. Certificate‑based authentication matters whether you're encrypting your data or not, because it allows you to verify that an actor is who they say they are and that it's not somebody spoofing them.
Then encryption. You've got to look at encryption at rest, whether you need to encrypt your data when it's sitting, whether that's in memory somewhere, or in a file system or wherever the data sits. Can you personally identify someone from that data or can that data be used to compromise your system?
If so, you need to, fundamentally, encrypt it and keep it encrypted until it needs to be used again versus "That's just aggregate data," or data that you can't trace back or use. In that case, you may want to just encrypt it in flight.
There is some data where you may not need to encrypt it at all, but you definitely need to use certificates to authenticate that the actors in that system are who they say they are.
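James's triage of what to protect can be captured as a small decision function. This is my own illustrative sketch; the category names and policy fields are hypothetical.

```python
# Sketch of the data-protection triage described above: data that can
# identify a person or compromise the system gets encrypted at rest and in
# flight; traceable data gets encrypted in flight; and every actor is
# authenticated with certificates regardless. Field names are illustrative.

def protection_policy(identifies_person, can_compromise_system, traceable):
    policy = {"authenticate_with_certificates": True}  # always, per the interview
    if identifies_person or can_compromise_system:
        policy["encrypt_at_rest"] = True
        policy["encrypt_in_flight"] = True
    elif traceable:
        policy["encrypt_at_rest"] = False
        policy["encrypt_in_flight"] = True
    else:  # untraceable aggregate data
        policy["encrypt_at_rest"] = False
        policy["encrypt_in_flight"] = False
    return policy

# A customer's charging history is personally identifiable:
print(protection_policy(True, False, True))
```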
Gordon:  Thank you for your time. Have any last words?
James:  Yeah, I would say please come and check out redhat.com/embedded. It's the beginnings of our embedded story there. We're going to be continuing over the next few weeks releasing additional white papers and information at that site.
Please catch me on Twitter. I'm @jkirklan and would love to have a conversation with anybody that's interested in the topic.
Gordon:  Great. Thanks very much, James.

James:  Thanks. Have a great day.

Wednesday, June 11, 2014

Podcast: Cloud management with Dell's James Urquhart

Management software changing! Users demanding! Containers coming!

I've known James for a number of years now and he always has a great deal of insight to share. This podcast from the Cloud Expo 2014 show floor (apologies for audio that isn't quite up to my usual standards) is no exception. James talks about how IT has to build for the users--not just themselves--and the stories of the moment. (It's probably no surprise that Docker gets a call out.)

Listen to MP3 (0:13:44)
Listen to OGG (0:13:44)

Friday, June 06, 2014

Links for 06-05-2014

Tuesday, June 03, 2014

Home automation meets the analog world

Apple homekit 310 236

I sort of hate to be the naysayer, which I seem to be about a lot of futuristic things these days. But I’m having a lot of trouble with the whole SmartHome idea, Apple’s HomeKit entry notwithstanding.

I’m certainly not a gadgetphobe and I even still have some wireless X10 controlling some lights in rooms that were never completely rewired in my 1823 house. But it’s pretty hard for me to imagine what realistic relatively near-term benefits would lead me to any sort of wholesale upgrade of light switches and such. Heck, cool as the Nest looks, I can’t really justify replacing a perfectly functional programmable thermostat with one.

I suppose that really solid voice recognition and smart command processing for music, video, and communications systems could be interesting in a few years. (Though how long has it been since voice recognition has been on the cusp of good?) I wouldn’t mind telling my phone to turn on music to such-and-such playlist on the downstairs speakers only. But, as the hierarchy of my daily annoyances and chores goes, saving a minute to walk to the old iPhone that feeds my stereo and poke at it with my fingers a few times is pretty low on the list. 

And, indeed, anything that's primarily about getting home digital things to do stuff isn’t hugely interesting. Maybe that’s a failure of my imagination, but so it goes.

It’s not that I can’t imagine useful home automation if I give my imagination carte blanche to embrace the possibilities. Load the dishwasher, run it, and put away the dishes? Sign me up. Do my laundry and hang it up? Please. But Roombas notwithstanding (and I don’t think one would work terribly well with my house layout), I don’t see any of this coming about anytime soon. And, arguably, even more modest advances will run smack into appliance and kitchen life cycles that stretch into decades.

Automation can be extremely powerful in controlled environments with well-defined tasks and constraints. My messy analog home? A lot less so.

Monday, June 02, 2014

Links for 06-02-2014

Books remain long


From Martin Weller:
Now ask yourself, how many academic books (or even fiction) have you read that were really a 40K word idea stretched out over twice that length? Me, I'd say nearly all of them. This is a classic example of old conventions dictating the possibilities of the new. My book will be available freely under a CC licence as an epub and PDF version. There will be a physical copy available at a reasonable price, so the need to make the book 80K words in length diminishes. I had made the case I wanted to make, explored it in depth, and kept it reasonably concise. People might even read it.
This isn’t a new thought. Back in 2009, Philip Greenspun wrote: "Suppose that an idea merited 20 pages, no more and no less? A handful of long-copy magazines, such as the old New Yorker would print 20-page essays, but an author who wished his or her work to be distributed would generally be forced to cut it down to a meaningless 5-page magazine piece or add 180 pages of filler until it reached the minimum size to fit into the book distribution system. "
That said, Kindle Singles and long blog posts notwithstanding, I’m not sure that mainstream publishing has changed all that much. The gravitas that a book brings still requires a certain thunk factor as we used to say when writing reports when I was in the industry analyst biz.

Thursday, May 29, 2014

Podcast: Security, privacy, and home security with Gordon and Ellen


Red Hat's Gordon Haff and Ellen Newlands talk security and privacy from the MIT Sloan CIO Symposium, the implications of privacy for IoT, whether Google could get into the home security business, and the mess that is security standards in cloud and elsewhere.

Technology and Culture at the MIT Sloan CIO Symposium 2014
Google and Nest may move into home security by buying out Dropcam

Listen to MP3 (0:30:11)
Listen to OGG (0:30:11)

Links for 05-29-2014

Wednesday, May 28, 2014

Yes, automation needs to be autonomous

Googleselfdriving

From John Markoff at The New York Times:

For the past four years, Google has been working on self-driving cars with a mechanism to return control of the steering wheel to the driver in case of emergency. But Google’s brightest minds now say they can’t make that handoff work anytime soon.

Their answer? Take the driver completely out of the driving.


I really want to give Google the benefit of the doubt here and assume that their engineers are smart enough not to have thought it was realistic for this sort of automated system to have a realtime manual backup. As I discussed a couple weeks back, "the handoff between manual (even if assisted) and autonomous needs to be clearly defined. Once you hand off control, you had better trust the autonomous system to do the right thing (within whatever margin of error you deem acceptable). You can’t wrest back control on the fly; it’s probably too late."

Tuesday, May 27, 2014

Links for 05-27-2014

Thursday, May 22, 2014

Technology and culture at the MIT Sloan CIO Symposium 2014

Sandy Pentland, MIT Media Lab

I learned a new buzzword at yesterday’s MIT Sloan CIO Symposium: “The Fog”—sort of Cloud + Internet of Things. Mercifully, that notwithstanding, the event was, per usual, an in-depth snapshot of not only up-and-coming technology trends (as one would expect at MIT) but also many of the related cultural and organizational issues. You can think of the event as being about the technological possibilities—but also about the constraints on those possibilities imposed by culture and other factors.

The MIT Academic Panel is a good jumping off point. Moderated by Erik Brynjolfsson (co-author with Andrew McAfee of The Second Machine Age), it examined the idea that we are “now beginning to have technologies that augment the control system” (i.e. the human brain) in addition to the "physical power system" (i.e. human muscles). Brynjolfsson went on to state that “We are at the cusp on a 10 year period where we go from machines not really understanding us to being able to."

One example discussed by the panel was self-driving cars. John Leonard from MIT CSAIL and the Department of Mechanical Engineering said that he was “amazed by the progress of what’s happening out there,” likening autonomous driving systems to search for the physical world. At the same time—and here’s where the constraints come in—he also said that he had the “sense that we’re not quite there yet,” for example, to determine what might happen in a tricky driving situation. What’s “not quite there”? No real predictions. Leonard did say however that he only saw a 1 in 10 chance of a “really big [employment] transformation,” which I took to mean a 1 in 10 chance of what I like to call a robo-Uber (i.e. truly autonomous cars) in any near-term time horizon. Sloan prof Thomas Malone added that he would “be surprised to see general intelligence computers relative to people” in 30 to 40 years.

In other words, strong AI—as opposed to things like IBM Watson that just appear intelligent—remains elusive. And it’s also unclear what limits that constraint puts in place.

The MIT Media Lab’s Sandy Pentland—decked out in vintage wearables—offered some other potential limits when he noted that the “rate of innovation in technology is much greater than the rate of change in government is much greater than the rate of change in culture. The NSA was a pretty well-governed organization—for the technology of the 1960s.” But, now, he went on to say “Everything is becoming data-fied.” And, while there’s always been a lot of slop in laws and how they’re enforced, that becomes more difficult when there’s potential telemetry and data everywhere. Automatic traffic tickets anyone?

As for passwords? They’re “useless” says Patrick Gilmore of the Markley Group. “If you’re not already using 2-factor authentication, you’re behind.” Nor was he a fan of password managers. Mind you, this is a somewhat enterprise-centric view of security. Tim Bray has argued for federated identity in a broader context, which requires trusting someone, and people generally aren’t very trusting these days. But it’s probably better than the password status quo in a lot of situations.

Risk management and security—and their intersection with ever-increasing quantities of data—were also big topics throughout the day. Forrester Research’s Peter Burris, moderating a Leading the Digital Enterprise panel, opined that instead of saying we can protect everything we have, we have to think about what we can do afterwards—in addition to continuing to try to stop attacks. Equinix’s Brian Lillie agreed, saying “You’re not going to stop everything; it’s a cornerstone of risk management.” And Raytheon’s Rebecca Rhoads spoke about the need for sophisticated compartmentalization of information, driven by regulations and other factors.

Gilmore also suggested that people coming to his company—Markley’s a colocation provider—“mostly aren’t asking the right questions.” When dealing with cloud and other infrastructure providers, he argued that you should be looking in more depth than most people do. How long do you keep backups? How many versions? What type of physical security do you have? Do you degauss your hard drives when you retire them?

Mark Morrison of State Street also noted that you can’t outsource all of your security and have to think about how all of your security fits together—including all your point security products, your operational processes, and your external providers—and constantly evaluate. He also noted that there’s a “conundrum between privacy and information security—the level of monitoring and sophistication that lets you institute countermeasures."

Patrick Gilmore, Markley Group

Security and privacy aren’t the only things that play into data though. There’s also the pesky matter of physics. Lillie discussed hybrid cloud models in this context because “if you have enormous data sets, data gravity is happening. You need to find ways to connect clouds to private enterprises."

If I had to sum up my main takeaways from the day, they’d be something like the following. There’s the potential for many big changes related to computing power, to data, to computing ubiquity. We’re already starting to see some of the results. But some technological distances that seem small aren’t. (Think reliable speech recognition.) And, even more importantly, culture, laws, ethics, and economics all matter. Which is one reason that CIOs increasingly have to work closely with business owners to deliver on technology promises rather than focusing on the technology alone.

Wednesday, May 14, 2014

Links for 05-14-2014

Friday, May 09, 2014

Links for 05-08-2014

Wednesday, May 07, 2014

Links for 05-07-2014

Tuesday, May 06, 2014

Links for 05-06-2014

Smart crowds, Irrational individuals?

This is from a presentation/discussion from Boston ProductCamp in May 2014. Here's the abstract: We've all made rational decisions and forecasts based on individually analyzing the best available data. But there are many other aspects of decision making. This session will examine some of those. When can groups of non-expert individuals beat some of the best experts? What are some of the common biases that cause ordinary people to make decisions differently from those that they "should" make? Can you take advantage of the ways others make decisions or is this unwarranted manipulation?

Monday, May 05, 2014

Automation and autonomy

Bmw spartanburg plant 12

I’ve been thinking and reading about autonomous systems of late—both autonomous IT systems and autonomous systems of other types such as vehicles. I also read a lot of misconceptions about automation—whether it’s in the arguments against or in misunderstanding what automation really means. I’ll be writing further on the topic but here are five points to get started. Comments welcome.

Computers are good at things that can be automated

Back in my earlier life at Data General, we were selling some of the earlier symmetrical multiprocessor (SMP) servers to large enterprises, including Wall Street. SMP introduced a new wrinkle: where to place individual processes so that the system as a whole, with its multiple processors, ran most efficiently. One approach was to place them manually—which is precisely what a number of our big customers wanted to do; we even wrote and sold them software to help them do so. But you know what? The operating system scheduler could actually do this job pretty well in the aggregate, as all these customers eventually recognized.

There are legitimate questions about which tasks can be readily handled by computers and which can’t. With respect to self-driving cars specifically, computer AI interacts with the physical world much differently from a human. It’s fair to say that computers will be able to do many things much better than even a good driver can, while handling other situations will prove very difficult to solve. With datacenter computing though, it’s clear that many tasks have to be automated eventually and exceptions should be relatively rare.

Assistance can precede automation

Yet, even when complete automation isn’t (yet) achievable, assistive technology can still significantly reduce the number of activities people need to perform. We’re already seeing this in automobiles with technologies like adaptive cruise control, which can adjust a car’s speed to maintain a safe distance from any vehicles ahead. Such systems are mostly in luxury cars today but I expect they’ll become both more widespread and more sophisticated. And judiciously applied assistive systems can be rolled out far more incrementally than anything taking over full control.

The same is true with cloud computing. One example that I like to use is around the idea of cloudbursting—typically used to mean the dynamic movement of workloads from private to public clouds in response to an increase in demand. As I’ve written previously, this strong form of cloudbursting—much less the idea of workload movement in response to changes in public cloud spot pricing—gets into a lot of complications. However, hybrid cloud management software and operating systems that can run in different environments make it possible to move applications around as needed (e.g. to switch cloud vendors) even if the process isn’t necessarily completely autonomous and hands-off. 

Automation isn’t all or nothing

Even when hands-off automation works well and is appropriate for some tasks, it may not be used—or may be used under a more rigorous set of controls—elsewhere. With respect to self-driving cars, I can easily imagine an interim stage in which they drive autonomously on designated sections of limited-access highways—and not elsewhere. For anyone who commutes on the highway or does long Interstate drives, this should be an obvious win even if it's not the nirvana of a robo-Uber.

Similarly, while “automate more” should be IT’s mantra, most companies aren’t starting from scratch. It won’t always make as much sense to aggressively automate stable legacy systems as it will to automate through a new OpenStack infrastructure that’s running primarily new cloud-enabled workloads. Standardizing and automating are effective at cutting costs and reducing errors just about everywhere—but the bang for the buck will be bigger in some places than others.  

But autonomy requires a defined control handoff

The above said, the handoff between manual (even if assisted) and autonomous needs to be clearly defined. Once you hand off control, you had better trust the autonomous system to do the right thing (within whatever margin of error you deem acceptable). You can’t wrest back control on the fly; it’s probably too late.

In so many autonomous car discussions, I hear statements to the effect of: “If there’s an emergency, the driver can just take over.” Well, actually he can’t. He’s playing a game on his iPad and he probably needs a good 30 seconds to evaluate the situation and take any corrective action. OK for some situations, not for others. If the car’s in control, it has to deal with things itself—at least anything urgent.

With the complex distributed IT systems that increasingly characterize cloud environments, it's certainly important to understand what's going on. But events happen and cascade at incredibly short time scales by human standards. Check out this presentation by Adrian Cockcroft of Battery Ventures in which he talks about some of the challenges associated with monitoring large-scale architectures.

Autonomy can require new approaches/workflows

Finally, the best way to automate is likely not to just automate the old thing, certainly not if the old thing is a mess. To be sure, a clean-sheet approach may be constrained by the need to coexist with what's already in place. The infrastructure we'd build for 100 percent self-driving cars is much different from what we would build (and have built) for a 100 percent human-driven one. However, even given a mixed environment, I suspect that over time we'll add infrastructure to help autonomous cars do things they'd have trouble doing otherwise.

In the case of IT, we're seeing new classes of tools oriented toward large-scale cloud workloads and DevOps processes. One big difference between these tools and those of the past is that they're mostly open source. Donnie Berkholz of RedMonk discusses some of them in OpenDevOps: Transparency and open source in the modern era. These include configuration management tools like Puppet and Chef as well as monitoring and analysis tools like Nagios and Splunk. DevOps itself, whatever your precise definition, is very much tied to the idea that much of the manual, routine ops work of the traditional sysadmin is increasingly automated. That automation is what enables a developer to take over so many ops tasks.
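The core idea behind tools like Puppet and Chef is convergence: you declare the state you want and the tool computes and applies only the changes needed, idempotently. A tiny sketch of that model (a hypothetical resource dictionary, not the real Puppet or Chef API):

```python
# Desired-state convergence in miniature: apply only the deltas between
# actual and desired state. Resource names here are hypothetical.

def converge(actual, desired):
    """Return (and apply) the changes needed to reach the desired state."""
    changes = {k: v for k, v in desired.items() if actual.get(k) != v}
    actual.update(changes)
    return changes

state = {"nginx": "stopped", "port": 80}
print(converge(state, {"nginx": "running", "port": 80}))  # {'nginx': 'running'}
print(converge(state, {"nginx": "running", "port": 80}))  # {} -- already converged
```

Running it twice changes nothing the second time—that idempotency is what makes it safe to automate the same run across thousands of machines.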

Automation done right is a huge positive. But we need to understand what it is, how to use it, and how to interact with it. 

[Photo credit: BMW. BMW Spartanburg, SC assembly plant.]

Links for 05-05-2014

Thursday, May 01, 2014

Podcast: Autonomous vehicles, passwords, and IoT with Gordon and Ellen


In the first episode of a new Cloudy Chat feature, I sit down for a free-wheeling discussion with one of my Red Hat colleagues. Today, my co-host is Ellen Newlands, the product manager for identity management at Red Hat. We start with self-driving cars and other autonomous vehicles and move on to the Internet of Things, against a background of the security and privacy implications in all of this.

A few links to go with the podcast:

Google self-driving cars
FreeOTP
Federated Identity, Tim Bray
McKinsey article on the Internet of Things

Listen to MP3 (0:31:38)
Listen to OGG (0:31:38)

Monday, April 28, 2014

Links for 04-28-2014

Thursday, April 24, 2014

Links for 04-24-2014