Wednesday, May 27, 2015

Links for 05-27-2015

Thursday, May 21, 2015

MIT Sloan CIO Symposium 2015: Dealing with Disruption

Peter Weill, MIT Sloan CISR

Board members estimate that 32 percent of their companies' revenues are under threat from digital disruption in the next five years. This was one of the findings from a November 2014 MIT CISR study that Peter Weill shared to kick off the MIT Sloan CIO Symposium 2015 yesterday. The theme of the day was “Inventing Your Future: Accelerating Success Through Technology.” The annual event always explores the intersections of business and technology in interesting ways, and this year was no exception. I’ll cover a few of the sessions that especially caught my eye here.

Derek Roos, CEO Mendix

The first was the topic of digital disruption. This is happening because, in the words of Mendix CEO Derek Roos, "Every company is now a software company. Every employee is in IT. The CIO is a business leader.” Jennifer Banner, the CEO of Schaad, echoed that "10 years ago I’m not sure I would have known the CIO if I had ridden the elevator with him. The CIO was not really in strategy. Now [the CIO is] absolutely critical in helping the board move into [digital] strategy."

At the same time, Christopher Perretta, CIO of State Street, noted that "risk has changed. Risk excellence is top of mind. If I blow the risk, that's it."

These twin and sometimes opposing needs to be nimble enough to handle digital disruption while simultaneously dealing with risk led to a discussion of two-speed (a.k.a. bimodal) IT. As Roos explained:

What's critical is you can't just take existing IT and decide to go fast. You may be able to incrementally improve efficiency and speed. But you have to do things differently. [Consider a] large insurance company. We created a fast-track IT organization and put a cross-functional team together. Very nimble. [They] were able to take introducing new products from 18 months to three weeks. Think like a startup. Accept that they may fail. Fail, but fail fast.

Ultimately, however, Roos envisions that the distinction between IT and not-IT will blur. "Eventually the organizational structure of IT has to change. 100 years ago all the typing was done in a typing pool."

As a side note, there’s been a lot of discussion of late in “cloud circles” about the bimodal IT concept (at my employer Red Hat as well as elsewhere). It’s not without its detractors but, properly understood as referring to a modernizing classic IT plus a strategic initiative based on cloud platforms, DevOps, and new-style applications, it makes a lot of sense. That it made such a prominent appearance at a relatively business-oriented event such as this helps substantiate its usefulness as an organizing principle.

Daniela Rus, Director, MIT CSAIL

The Academic Keynote Panel dealt with the impact of automation on all this. Which tasks can easily be automated away? The key question is how repetitive and long-lived the tasks are. Prof. Daniela Rus, Director of MIT CSAIL, noted that the "car industry automates 80 percent of tasks because they can take advantage of repetitive tasks. But cell phones and electronics are generally only about 10 percent automated. If [the] product is going to change every three months, you don't have time to retool and reconfigure the factory. Robots today are a bit like programming before we had compilers."

In general, tasks related to iterative software testing and deployment, as in DevOps, probably have fewer limitations than do tasks in the physical domain. Nonetheless, it’s worth remembering that workflows must be understood and repeatable in order to enjoy the benefits of automation.

On the same panel, Prof. Mary “Missy” Cummings, Director of Duke’s Humans and Autonomy Lab also cautioned that heavily automated systems can be a problem when humans need to take over control. A former US Navy fighter pilot, Cummings said that "Commercial pilots touch the stick for 3 or 7 minutes. Mostly on takeoff because planes aren't rated for automated takeoff. That's on a tough day. How much automation is in there? How much should be in there? Boredom is setting in. Humans don't handle that well." 

 

Friday, May 15, 2015

Links for 05-15-2015

Podcast: Red Hat's John Mark Walker talks Project Atomic and the Nulecule spec


John Mark Walker is the Open Source Ecosystems Manager at Red Hat. In this podcast, he discusses Project Atomic, a lightweight Linux specifically set up to provide a containerized environment for services/applications. We also cover the new Nulecule spec. (Hat tip to the Simpsons!)

From the announcement:

Nulecule defines a pattern and model for packaging complex multi-container applications, referencing all their dependencies, including orchestration metadata, in a single container image for building, deploying, monitoring, and active management. Just create a container with a Nulecule file and the app will “just work.” In the Nulecule spec, you define orchestration providers, container locations and configuration parameters in a graph, and the Atomic App implementation will piece them together for you with the help of Kubernetes and Docker. The Nulecule specification supports aggregation of multiple composite applications, and it’s also container and orchestration agnostic, enabling the use of any container and orchestration engine.
Blog post on the Nulecule announcement
Demo video
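As a thought experiment, the "graph" idea from the announcement — an application described as components plus their dependencies, which the tooling pieces together in the right order — can be sketched in a few lines of Python. Everything below (the component names, the `deploy_order` helper) is invented for illustration; this is not the Nulecule file format or the actual Atomic App implementation.

```python
# Toy sketch of a Nulecule-style application graph: each component lists
# the components it depends on, and a deployer walks the graph so that
# dependencies come up before the things that need them.

def deploy_order(graph):
    """Return component names so that dependencies precede dependents."""
    order, seen = [], set()

    def visit(name, stack=()):
        if name in stack:
            raise ValueError(f"dependency cycle at {name}")
        if name in seen:
            return
        for dep in graph.get(name, []):
            visit(dep, stack + (name,))
        seen.add(name)
        order.append(name)

    for name in graph:
        visit(name)
    return order

# A hypothetical composite app: the web tier needs a database and a cache.
app = {
    "web": ["db", "cache"],
    "db": [],
    "cache": [],
}

print(deploy_order(app))  # db and cache come up before web
```

The real spec layers more on top of this — orchestration providers, parameters, nested composite apps — but the core notion of resolving a declared graph rather than scripting deployment steps is the same.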

Community resources:

IRC: #nulecule on freenode
GitHub repository for Nulecule: github.com/projectatomic/nulecule
GitHub repository for Atomic App: github.com/projectatomic/atomicapp
Container-tools mailing list: www.redhat.com/mailman/listinfo/container-tools
Docker Hub namespace: registry.hub.docker.com/u/projectatomic/atomicapp/

Listen to MP3 (0:14:20)
Listen to OGG (0:14:20)

[Transcript]

Gordon Haff:  Hi, everyone. This is Gordon Haff with Red Hat with another edition of the Cloudy Chat Podcast. I'm here with my friend John Mark Walker, who is the Open Source Ecosystem Manager in the Open Source and Standards team at Red Hat. Today, we're going to be talking about Atomic.
John Mark Walker:  Atomic. Cloudy Chat, this is John Mark. Thank you, Gordon, for having me. As he said, I am the Open Source Ecosystem Manager for the Open Source and Standards team. What is an open source ecosystem manager? Well, I'm glad you asked.
It is somebody who goes out and works with other organizations, companies, developers that are working with our open source software communities creating ways that we can be more efficient, work with them more collaboratively in better ways, and hopefully be more responsive to their needs when it comes to creating open source software that's usable and widely propagated around the world.
Gordon Haff:  We've been talking about Atomic in some of the past podcasts. I think in the past, we've been focusing on it as a containerized operating system, a lightweight containerized operating system. As this space evolves, it's really becoming about more than that. John Mark, explain the direction that things have headed in.
John Mark Walker:  Sure, we started off talking about Atomic Host about a year ago. The reason that we focused on that was because that was the part that we were delivering first. We were delivering a stripped down operating system for customers that would be based on RHEL. On the upstream communities, we're really talking about Fedora and CentOS based images, stripped down images tailor-made for hosting containers.
That's really what Atomic Host is all about. We introduced tools, like rpm‑ostree, to let you do Atomic updates to the container host. We're now adding more to the pot. In addition to the Atomic Host piece, we're now starting to talk more broadly about container‑based applications. It's not just about the stripped down operating system at the core. Although, that is certainly an important piece of it.
Certainly, given Red Hat's history, that's right in our wheelhouse. But now, we're starting to evolve around that and add the different layers of what's called the Atomic platform, or the Atomic app platform. What it is is a system, or a set of tools, that helps developers and implementers compose container‑based applications, as well as deploy container‑based applications.
You'll notice I'm choosing my words carefully here around container‑based applications. What is that exactly? What we're trying to figure out is, "How do you make it easy to compose applications and services that scale‑out on an as needed basis that give the creators the ultimate flexibility in defining how an application's architecture works together, how it interacts with the orchestration tools and libraries around it?"
In our case, we're definitely investing heavily in Kubernetes. That's our orchestrator of choice. How does all that stuff work together? How do we define an application such that you can simply have a definition file for the app and, then once you launch the application, have it pull down resources as needed working with the orchestration components to define policies around an app or a service?
That's the space that we're looking at right now. That's where we're going with the nulecule specification, which we've just published in the last couple of weeks. It's our attempt at defining not only the problem space around container‑based applications, but also how you actually define them, and then coming up with tooling that helps developers and implementers work with them.
We have the spec. It's in the Project Atomic Organization on GitHub. You go to github.com/projectatomic. You'll see the nulecule repository there. You'll see some example code there.
We looked at a lot of things. We looked at TOSCA. We looked at other attempts to define the same problem space. We took the best of, I hate to use the term "best of breed," but [laughs] essentially that's what it is. Then, now, our focus is around creating example applications that utilize the spec and transform it into an actual working application.
So far, our focus is on things like WordPress. WordPress seems to be the Hello World of cloud applications. That's the one we're starting with. It has well‑defined components, so it's relatively easy to work with.
But we're also looking at a couple of Mongo‑based apps. We're really looking at using this spec to work with all of the upstream projects that we support and sponsor, from ManageIQ to Gluster, Ceph, and everything. We want to make sure all these can be "containerized" and can actually work in a way that's deployable and usable by normal people.
Gordon Haff:  What were some of the guiding principles behind that? You said, "We looked at some different possibilities out there." Why this particular spec?
John Mark Walker:  That's a great question because one of the first things that people ask us is, "Well, why not use tool X?" Where tool X can be OpenShift or some other scale‑out application platform, application developer platform or application deployment platform.
The reason we chose the attributes of this particular spec is because we want it to be platform neutral. Red Hat, we've pretty much settled on a stack. We support Docker. We work very heavily with Kubernetes. We collaborate very closely with that team.
Those are technology choices we've made. But we don't want the spec to be specific to the implementation. We want there to be room for other platforms to implement the spec according to their needs. If you're, say, an OpenShift developer, you should be able to use the spec and plug it into the applications that you're creating on OpenShift.
If you're a developer working on another platform, we're hoping that support for that platform will be added to the tooling so that you can run the spec wherever it's added to a particular software project. It's really around platform neutrality, but also the concept of the layered graph, where the graph is all the components and services that make up a composite app.
You want to define the graph, and not only define the graph, but be able to pull in the layers on an as needed basis. If the layers don't exist, to be able to compose them on the spot according to the policies that you've defined in your spec file. To us, these are some of the most important components of the spec.
There's also the scale‑out nature that we want to support. But, frankly, who doesn't want to support scale‑out architectures these days? It's the cost of doing business in the cloud. But those first two that I identified, those are the central questions we wanted to answer when it came to defining what the spec is.
Gordon Haff:  Who else has been involved in the creation of this spec?
John Mark Walker:  Right now, it's us, Red Hat, meaning those are working very closely with the Atomic Project. We also have a team coming in next week from a certain Silicon Valley IT tier one company, who shall be named later, hopefully. [laughs] But we're very quickly looking to add co‑collaborators on this spec.
For one thing, we want to make sure that there is enough of a community out there that agrees on the problem space, and that once we agree on the problem space that they agree that this spec is the right approach to solving that problem.
Gordon Haff:  What's the relationship here with projects like Docker, Rocket, and some of the other things going on? You mentioned Kubernetes specifically...
John Mark Walker:  Sure.
Gordon Haff:  ...and some of the other things you said.
John Mark Walker:  Like I said, at least for now, we have settled on an implementation of this spec. But if you're partial to, say, Rocket, as a container implementation, we want to be able to support that, as well. We're talking to a number of other communities and projects that work and produce container tooling to generate more support for the spec.
That certainly includes the Docker community, the Rocket community, Kubernetes as mentioned, and the OpenShift community. Any other community that is focusing on containers at the moment, we want to work with you.
Gordon Haff:  What's coming down the road?
John Mark Walker:  What's coming down the road is that we're going to be pushing more and more example applications into the repositories under Project Atomic on GitHub. We can go forward with more and more examples.
We've also got this really great continuous integration build system that we're working on, so that you, Mr. Developer or Mr. Software Implementer and Deployer, will be able to stick a spec file, an Atomic spec file or nulecule spec file, into your repository, push it into our build system, and either get feedback on why it failed, or produce consumable containers right away that you can stick on whatever registry you want to post them to.
That's going to come up very soon, as soon as we have finalized our example repository so that we have a very good definition around how to create these containers. That way developers will be able to test their own container builds and make sure that they work, and then post them onto various registries, for example like Docker Hub, and post them for general consumption.
Gordon Haff:  If somebody wants to get involved with this, either using it or developing for it or whatever, how do they do that?
John Mark Walker:  Well, the first place to go is projectatomic.io. That's the home of the Atomic Project. If you want to get down and dirty with the code, you can go to the GitHub Organization, github.com/projectatomic, where you'll see both the example applications and the nulecule specification, as well as the regular stuff that you've grown accustomed to seeing around Project Atomic.
It should all be there. Look for more and more documentation that we're producing around the application definition and our solution to the container application problem.
Gordon Haff:  Thank you.
John Mark Walker:  Wait. Wait. You left out the most important part. Why are we calling this nulecule?
Gordon Haff:  [laughs] Why are we calling it?
John Mark Walker:  Well, Gordon, I'm so glad you asked that question. If you search for nulecule, you'll notice that it refers to a particular episode of "The Simpsons" where Homer is trying to say the word "molecule" and ends up...It's a malapropism from "The Simpsons."
The reason we chose that is, if you look at the Atomic Project and at that analogy where a container is an atom, well, what's a congregation of atoms? It's a molecule. We thought it would be really funny if instead of using the word molecule, we used the term "nulecule" because it seems related [laughs] in some way.
Gordon Haff:  This gets to really the heart, as well, of the topic that was covered in a number of past podcasts: the way we're moving toward applications in this containerized world is really as a set of relatively lightweight services that are basically talking to each other as services and [inaudible 12:25] the definition, rather than one big monolithic application.
John Mark Walker:  That's precisely it. So that we're buzzword compliant, that means microservices and the Internet of Things. The Internet of Things and microservices, I have to mention those two things, because I want there to be enough Web traffic around this podcast. The old monolithic ginormous application is going away.
When you think of microservices, it's really a continuation of the old UNIX model. The old UNIX model was small applications that do one thing very well working in concert together. Because in the old days, these were command-line tools that worked together to produce an application on a UNIX server.
Well, now, each of these little applications is running in its own container or pod space, its own namespace defined by a pod, working in concert with the others to produce these services that are based in the cloud. It's a continuing evolution of the old UNIX model. I think that's an area that most Linux developers and admins should be very comfortable with.
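The UNIX analogy John Mark draws is easy to demonstrate: small single-purpose steps composed through a plain interface, which is the same idea microservices apply over the network. Here is a minimal Python sketch of that pipeline pattern; the "log lines" and stage names are made up for illustration.

```python
# UNIX-pipeline-style composition: each function does one small job, and
# a plain data shape (an iterable of strings) is the contract between
# stages -- much like text streams between `sort`, `uniq -c`, and friends.
from collections import Counter

lines = ["GET /index", "GET /about", "GET /index"]  # made-up request log

def count(items):
    """Count each distinct item, like `sort | uniq -c`."""
    return Counter(items)

def most_common(counts):
    """Order by frequency, most frequent first, like `sort -rn`."""
    return counts.most_common()

print(most_common(count(lines)))  # [('GET /index', 2), ('GET /about', 1)]
```

Swap any stage for another that honors the same interface and the pipeline still works; that substitutability is the property microservices aim to keep as the "pipe" becomes a network API.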
Gordon Haff:  Great. Thanks, John Mark. Anything else you'd like to add?
John Mark Walker:  Just that I started working in this area about a month ago, and it is a very, very exciting area to be in.

Gordon Haff:  Well, thanks.

Monday, May 11, 2015

Podcast: OPNFV with Red Hat's Dave Neary

Dave Neary is the SDN and NFV community strategy head at Red Hat. In this podcast we talk Network Function Virtualization, who is using NFV, and the open source OPNFV project which will be coming out with its 1.0 release shortly.

Listen to MP3 (0:10:18)
Listen to OGG (0:10:18)

[Transcript]

Gordon Haff:  Hi, everyone. This is Gordon Haff of Red Hat, and welcome to another edition of the Cloudy Chat podcast. Today I'm joined by Dave Neary, who is the SDN and NFV community strategy head. That has something to do with communications, networking. There's lots of acronyms. Dave, introduce yourself, please.
Dave Neary:  Hi, Gordon. Thank you for the introduction. As you said, I'm working on the Open Source and Standards team on SDN and NFV. In that capacity, I've been working particularly closely with OpenDaylight and OPNFV.
Gordon:  We've covered this in prior podcasts, but it's been a little while, and I think this is a new area, still, for a lot of folks. Could you please explain, in reasonably simple English, what is NFV? What does it stand for? Why should you care?
Dave:  NFV stands for network functions virtualization, or network functions virtualized. It's essentially converting hardware that's been used in the telecommunications industry to provide features, telecommunications applications, to provide those in virtual machines instead of physical hardware.
Gordon:  Who would use this kind of thing?
Dave:  Typically, users are communications service providers, CSPs, to add yet another acronym. Your local telco would be using this to power voice, remote access for things like broadband, radio access. Those kinds of things are network functions. In the enterprise, in the data center, other things, like gateways, firewalls, intrusion detection systems can be considered network functions as well. They're applications that sit on your network and through which network traffic flows on its way to its destination.
Gordon:  What's the advantage of doing things this way rather than buying dedicated hardware, which I guess was the historical way you got these things?
Dave:  The first advantage is flexibility, agility, the ability to deploy new applications faster and cheaper than when you're doing hardware‑based deployment. Second advantage is the ability to optimize capex--capital expenditure--and operational expenditure by moving to virtual machines instead of physical machines. By virtualizing both your compute and your network, you make it easier to scale the number of applications that a single system administrator can manage, and you also make it easier to do capacity planning, to scale out your infrastructure over time, because you're no longer tied to physical infrastructure for your core network functions.
Gordon:  Dave, you work on our Open Source and Standards group. As we said at the beginning, you're involved with our community strategy. Tell us a little bit about OPNFV.
Dave:  OPNFV stands for the Open Platform for NFV. It's a project whose goal is to create a production‑style reference platform for network function virtualization with entirely open‑source components. It doesn't make a choice about which open‑source components to use there, except there are a certain number of things that are a given. You're going to need network virtualization, so a software‑defined networking controller. You're going to need infrastructure as a service. The project right now has converged on OpenStack as the infrastructure as a service for private cloud, and OpenDaylight, to a large extent, for the software‑defined networking piece. There's also a project in OPNFV using OpenContrail.
I would say that right now the project has four pieces. The first and the most important is the provision of hardware, on which you can deploy these open‑source components. The second is the deployment tools and integration ‑‑ the glue code, if you like ‑‑ that allows you to deploy all of these pieces together as a reference platform. The third piece is a set of requirements projects, which are going to define what are the needs of the telcos that want to use this NFV platform and how do we fill the gaps in the platform upstream to respond to those requirements. Then the fourth component is testing and performance, so actually testing network‑function workloads on top of this platform to identify gaps and to verify when the changes are accepted upstream, to verify that the platform is meeting the requirements and the needs of telcos.
Gordon:  We're around the time of the first release of OPNFV ‑‑ depending on exactly when we release this podcast. Could you tell our listeners about what's coming in that first release?
Dave:  The first release is really setting the foundational pieces, the building blocks, for being able to drive change. One of the things about OPNFV which is a very important value is that we want to be upstream first. What that means is that we're aware that an NFV platform is made up of multiple pieces, not just OpenStack and OpenDaylight, the two that I've mentioned already, but also data plane acceleration is very important, so projects like DPDK or the OpenDataPlane project are very important. KVM, the Linux kernel, Libvirt, are all pieces of the platform as well.
Optimizing that entire platform for NFV involves tweaking a lot of moving parts. To get those changes upstream, we first need to have a baseline. This first release is the baseline. We have a set of labs on which we can deploy the reference platform. We have two reference deployment stacks that are being included in the release. One is based on the Foreman, and the other is based on Fuel, which is an OpenStack installer, and we have a set of requirements projects that have been created over the course of the first release cycle.
This release is really where we agree on how we work together, and we create the base platform on which we can now start really driving change upstream through those requirements projects and through the testing and performance projects that are getting started.
Gordon:  Once we have this baseline, where do you see things going from there?
Dave:  We've already started to think about the next release cycle of OpenStack. We're working very hard in OPNFV to make sure that when we identify feature gaps in OpenStack, that those feature gaps are documented in a way that will be acceptable to the upstream OpenStack project. We're following the OpenStack specs process. That process is open now for the Liberty release, for Nova and Neutron, which are two very important projects in OpenStack for the NFV use case.
We're working on identifying features that will be implemented in the very next release of OpenStack. We're also identifying other features around, as I said, data plane acceleration, which is a big area, and figuring out how we move to the bleeding edge of those projects. Right now we're using the latest stable release of OpenStack and the latest stable release of OpenDaylight. As we start to get changes upstream, we will want to be able to integrate those as soon as they're integrated upstream so that we can test on the latest version. That's really what's coming, I think, in the next release cycle.
Gordon:  Who's all involved with this effort?
Dave:  A lot of people across the industry. I've mentioned telcos. The network equipment providers, these are the people who traditionally would've been vendors of those hardware solutions I mentioned earlier. Almost all of the network equipment providers are involved in this project. That's companies like Ericsson, Nokia, and Alcatel‑Lucent, who recently merged, Huawei, Samsung, NEC, Cisco. Then we have a large number of software platform vendors, people who are selling OpenStack solutions, and hardware vendors, people who provide the underlying data‑center hardware that you would have in a traditional data center.
Then there are a number of ISVs, independent software vendors, who are specializing either in open‑source virtual network functions or in components of the platform, things like deep packet inspection, the ability to look into packets as they're arriving in the network and identify what type of content is in them and tag them appropriately, or data plane acceleration, which I mentioned, which is the ability to offload data plane processing from the kernel into user space, which is a big accelerator of performance in the NFV world.
Gordon:  I think the fact that we have NFV World Congress coming up gives some indication of how much interest there is out there.
Dave:  Yes. This is a new event. It's a sister event to the SDN OpenFlow World Congress, which is an annual event that's been going on for some time. Really, it's one of the events that is going to be important in the NFV world. It's the first time it's being run, so we're looking forward to seeing how it plays out.
Gordon:  I also saw a fair amount of NFV presence when I was at the Open Compute Summit a few months ago.
What do people do if they want to learn more?
Dave:  The canonical source is www.opnfv.org. That's the website describing the project, its governance structure and so on. All of the work that the community is doing is in wiki.opnfv.org, and that is a very rich and very fast‑moving source of information on what's happening day‑to‑day in the OPNFV project.
Gordon:  Great. I'll be putting some more information, links, and so forth in the show notes for this podcast. Dave, anything you'd like to close with?
Dave:  This first release, OPNFV Arno ‑‑ we're naming our releases after rivers ‑‑ is the first, I hope, of many. We're looking forward to getting some changes into projects like OpenStack, KVM, and OpenDaylight very soon, to really turn this into the production‑style NFV platform that we hope it will become.

Gordon:  Thank you, Dave.