Tuesday, February 21, 2017

Podcast: Cloud Native Computing Foundation with Dan Kohn

Dan Kohn is the Executive Director of the Cloud Native Computing Foundation. In this podcast, he discusses the goals of the CNCF and the reason why Kubernetes is under the CNCF's umbrella--plus his take on serverless computing. In addition to Kubernetes, the CNCF also hosts Prometheus, Fluentd, OpenTracing, and Linkerd.

In the links below, check out the cloud-native landscape in particular, which catalogs the broad set of projects playing within this technology area. As Dan puts it: "Kubernetes is the cornerstone of a containerization and orchestration solution but is not a complete solution."

Link to MP3 (0:22:25)
Link to OGG (0:22:25)

Podcast: Blockchain and Hyperledger with Brian Behlendorf

I sat down with Brian Behlendorf, the executive director of the Hyperledger project, while I was out at Lake Tahoe for the Open Source Leadership Summit last week. The Hyperledger project is an open source collaborative effort created to advance cross-industry blockchain technologies. It works as essentially an umbrella project on top of projects such as Fabric and Sawtooth Lake.

Brian was a primary developer of the Apache Web Server and a founding member of the Apache Software Foundation. In our conversation, he covered topics such as balancing innovation and standardization, the use of blockchains for transactions within consortiums of competing organizations, technical challenges, and early examples of blockchain use that go beyond cryptocurrencies.
Link to MP3 (0:14:32)
Link to OGG (0:14:32)

Thursday, January 19, 2017

Podcast: Why sysadmins hate containers with Mark Lamourine

Two hats

In this podcast, Red Hat's Mark Lamourine brings the perspective of a former sysadmin to explain why containers can seem like a lot of work and potential risk without corresponding benefit. We also discuss the OpenShift Container Platform as well as a couple of Fedora projects aimed at removing barriers to sysadmin container adoption.

Show notes:

Link to MP3 (0:29:20)
Link to OGG (0:29:20)

Gordon Haff:  Hi, everyone. Welcome to another edition of the "Cloudy Chat" podcast. I have my former partner in crime, Mark Lamourine, back with me today. He's been off doing some other things.

Mark came to me with a fantastic title for a podcast, and I knew I just had to sit down with him, "Why Do Sysadmins Hate Containers?"

Mark, you've been a sysadmin.

Mark Lamourine:  That's my background--I'm a system administrator. I have a computer science degree, but I've spent most of my time either as a system administrator or as a developer advocating for the system administrators who have to manage the software that the people I work with produce.

Gordon:  I go to an event, like Amazon re:Invent, for example, and there are a lot of sysadmins there, maybe a little more new‑age system admins, DevOps, whatever the popular term is this week. They seem to love containers. So where are you coming from with that statement?

Mark:  There are actually two groups. What brought this up for me was that I was at the LISA Conference, LISA16, in Boston this fall. I noticed that there were only a couple of talks, one tutorial, and a couple of books on containers. There was a lot of the other traditional sysadmin material: new tools, people learning different areas.

I was there because I assumed that sysadmins were going to still think that containers are growing and that this would be a big thing coming. There was some of that, but I got an awful lot of, "Yeah, we don't do containers. We tried containers. It didn't work. That's old hat." There were a whole bunch of things which ranged from disinterest to disdain for containers among that group of people.

The difference between that group and a group at re:Invent is that re:Invent is specifically aimed at the technology. It's aimed at that company. It's aimed at Amazon. All the people who come are self‑selecting interested in cloud, in Amazon, in their products, and in their tools.

At LISA, the self‑selection is I am a professional system administrator without regard to the technology I use. There were a bunch of people there who use Amazon. They use virtual machines. They use cloud. They didn't find containers to be a compelling thing to follow.

Gordon:  Why don't they find containers a compelling thing to follow when everyone says they're so great?

Mark:  There were a number of different reasons that I heard. Some of them were just misinformation. There were people who said, "Yeah, we knew about that with jails." Back in the late 1970s, Unix got chroot, and BSD later added jails. I'm not going to go into it, but there's an answer to that: containers are not that. That was a very old thing.

I liken that to saying, "Well, this guy, Jameson," or, whatever his name was, in France, "discovered inoculation back in the 1800s. Why do we need flu vaccines and monoclonal antibodies?"

Gordon:  It's like, "Oh, what's this cloud thing? We had time‑sharing." "Oh, virtualization. IBM invented that in the 1960s. Why do we need this new thing?" It's this idea of, "Oh, everything's been done before."

Mark:  There are a number of things like that, a number of flavors: "Oh, Solaris had Zones. We know about that. See where that went." There were a number of responses like that. There were also a number of, "Oh, it's hype," and those people aren't wrong.

It's also an incomplete answer. I agree that it's hype, but I also think it's important, because while the hype may be way out in front of reality, reality is way in front of where it was three or four years ago.

Gordon:  They just don't see any benefit for themselves?

Mark:  That's really the sense that I got. When I got past the people who were just naysayers for whatever reason, and I started bringing up, "Here are these tools. Here are these things I've used. Here's what I've done with it," the response was, "Well, but how does that help me?"

They're getting their developers, and they're getting their managers, coming and saying, "Oh, we need, well, cloud in some form." Some of them mean OpenStack, some Kubernetes, some OpenShift, but their managers and their developers are saying, "Hey, there's this cool thing," and the sysadmins respond with two predictable responses.

One is, "Yeah, OK. I'm going to build a service for you, and that's work for me." The second one is, "This doesn't really help me. It gives me a lot more work. I've got to build new containers, I've got to build all of this stuff." Or they would say, "Let's put our app into containers," and everyone's first response is, "Let's shove the entire application suite into one container and treat it like a virtual machine lite."

Everybody finds quickly that's not productive. It requires a lot more work to do refactoring. Somewhere in that process, many of them have said, "Our engineers got tired of it," or, "We got tired of it, and we just went back to the old way of doing things, because it doesn't buy us anything right now."

Gordon:  I'll get back to doing things the new way versus the old way in a moment, because I think it's an important point. There's something that those of us who promote technology often forget. Without saying these sysadmins are Luddites, they have a job to do. That job is to keep systems up, and the idea of, "Let's do this new stuff that's going to put me out on the bleeding edge and probably get me on pager duty in the middle of the night when I'm trying to sleep," just doesn't sound very appealing.

Mark:  As a sysadmin, I think sysadmins are a slightly different breed from many other geeks and technophiles. Sysadmins are, by their nature, conservative. They are probably the least bling‑attracted technophiles you'll find. In large part, they're the ones responsible for making sure that it works, and so they're going to tend to be conservative.

They'll explore a little bit, but their goal really is to make things work and to go home and not get paged.

Anything that you introduce to them that is both a lot of day work, and an opportunity to get paged, they're going to greet with a certain amount of skepticism.

Gordon:  That's absolutely fair. I don't think developers, and certainly not the move‑fast‑and‑break‑things crowd, really appreciate that aspect of sysadmins. I think there's also an element, though, of, "This new stuff is going to abstract more. I already understand how the system works. It's going to create a new point of failure. It's going to complicate things. It's something else that I'm going to need to learn." That's not necessarily always the right point of view either.

Mark:  No, but they're human. Their goals are to make their own lives easier. One of the other characteristics of sysadmins is that they will spend a lot of time avoiding doing a tedious task twice. They'll spend a lot of time creating a script to do something that only takes them 10 or 15 seconds to type, because when they've typed it the 100th time, they get tired of it. Those are things they know how to do.

When you impose something new, because it's a requirement for other things, they're going to be resistant to that until it helps them because they're inherently lazy people. I mean that in the best sense.

Gordon:  Actually, that was pretty much the topic from some Google presenter at a recent conference. I don't remember what the details of it were, or who it was exactly, but he went through the Google infrastructure, and how Google approached problems.

It was at CloudNativeCon/KubeCon actually, and they talked about how their approach was, "Oh, I've done this three times. It must be time to automate it."

Mark:  Sysadmins, I find--and I know this is true of me--are pathologically lazy. Again, I use that in the nicest sense, in that there are times when I have spent an hour or more understanding and encoding an automated solution to a problem that literally took me a minute a day.

It sounds like, "Well, that was a waste of an hour in a day," except that after a while, it saves me an hour.

Gordon:  We're doing a podcast right now, and there are a lot of fairly repeatable manual processes associated with it. I put some intro on, I put some outro on, I do some transcoding to different formats.

There's manual editing of course, and you can't really automate that. I spent a couple of days at some point, writing a Python script, and now, it's super quick and not nearly as error‑prone. "Oh, I forgot to make that file public on AWS."
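
The kind of script described might look something like this minimal sketch. The file names, the use of ffmpeg, and the pipeline steps are assumptions for illustration, not the actual script:

```python
# Hypothetical sketch of a podcast post-processing pipeline: concatenate
# intro/outro, then transcode to MP3 and OGG. Builds the commands as data
# so each step is auditable before it runs.
from pathlib import Path

def build_pipeline(episode: str):
    """Return the ffmpeg commands needed to publish one episode."""
    stem = Path(episode).stem
    concat = ["ffmpeg", "-i", "intro.wav", "-i", episode, "-i", "outro.wav",
              "-filter_complex", "[0:a][1:a][2:a]concat=n=3:v=0:a=1",
              f"{stem}-full.wav"]
    mp3 = ["ffmpeg", "-i", f"{stem}-full.wav", f"{stem}.mp3"]
    ogg = ["ffmpeg", "-i", f"{stem}-full.wav", f"{stem}.ogg"]
    return [concat, mp3, ogg]

commands = build_pipeline("episode-42.wav")
for cmd in commands:
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # would execute each step in turn
```

Scripting the steps once makes the process quick and removes the "forgot to make that file public" class of error.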

Mark:  This is a characteristic of sysadmins: they do want to automate, but they are always going to use the tool they know first. Containers certainly present a really long learning curve before they start seeing the benefit. That's where a lot of the resistance really comes from.

Gordon:  It's probably worth contrasting containers with virtual machines in that regard, because of the way virtual machines came in. We were in the dot-bomb era, nobody had any money, and there were all these underutilized servers.

People just wanted to improve the utilization of those servers. They didn't want to change their operational procedures in major ways to do it. That's one of the reasons virtual machines became so popular as the approach at the time. They solved a specific problem--the utilization of physical servers, when no one had any money to buy more--without requiring a lot of changes.

Of course, virtual machines did evolve over time, with things like live migration, storage pooling, and so on. But, fundamentally, virtual machines didn't present a big barrier to sysadmins from an operational point of view.

Mark:  All it really added was the virtual machine barrier. Once you got your virtual machine going, you could hand it off to another sysadmin or to a Puppet script and say, "That's a computer. Acts just like your other computers."

Containers are not VMs. They are processes with blinders on. Building those blinders is a new way of working. You'll see people who still try to treat them as VM lite. There are people who will try to stuff essentially a virtual machine inside.

They very quickly find that there's either no benefit, or it actually incurred some cost. Those people will often go back to using virtual machines because they don't want to change their paradigm. The argument of the people who actually are advocates of containers is that there is some benefit to this model.

It doesn't appear quickly and it requires a lot more retooling, both of the real tools and of the mindset of the administrators before they see the benefit.

Gordon:  We've gone into configuration management in a lot more detail in another podcast, but there are some analogs there. You can certainly do automation using scripts, but, for example, as VMs came in and suddenly you were multiplying your number of "servers" by 10 or 20 or whatever the number is, the scripting didn't work so well any longer.

You really needed some of these newer types of tools that did things in more declarative ways, or that did things in other ways that were more suitable for very large and complex app configurations.

Mark:  That actually brings up a psychological or behavioral analog that I wanted to highlight: we've had situations like this before, where sysadmins or various other areas of the community were resistant to changes that some of us saw as important early on.

I remember trying to convince people that they should use sudo. They're like, "What do you mean? I can't code without root. How can I possibly work as a non‑root user? Why should I use this sudo thing? I just always log in as root anyway."

It took about a decade from when I first saw sudo, to the point where it was just accepted common practice that everyone did. There were other analogs like that. Configuration management was one of them. I remember having conversations with people who would say, "I run a shop with 10 or 20 hosts. Do I need configuration management?"

Those of us in the room were going, "Yes, you absolutely do." They go, "But, I don't have time for that." I'm like, "That's why you need it." It took a while for that to catch on. I think virtual machines were actually a big influence in the adoption of configuration management in the small shops.

They went from having four or five machines to having 20, 30, or 50 VMs. Even though they had the same hardware, they saw the multiplication of the manageable units. Configuration management made sense to them. Again, knowing Puppet or Chef--not CFEngine so much anymore--has become a common part of the sysadmin toolbox.

Ansible is a big one now. You're talking about the familiar-tools model: Ansible's largest appeal to a lot of people is that it looks like shell scripting, or like these other scripting languages, where Puppet and Chef were more traditional configuration management.

That's a bit of insight but the analog still holds. In containers, you've got people going, "Do I need that? Why do I need that?" Those of us who have worked with them a lot are going, "Yes. Yes, you need this." It's going to take something to induce them to see it.

Gordon:  You're seeing the cycle continuing with security around containers, for example. I was just reading yesterday about a minor possible exploit related to containers. The details don't really matter here, but there was a discussion going on on "Hacker News" where people were basically going, "Well, you could change this and that would fix this," and so forth.

Somebody wrote, "Or you can use SELinux with setenforce 1, and this exploit can't happen."

Mark:  That's another tool that's still in the adoption phase. There are still people who go, "SELinux is too hard to use." They'll disable it just as a matter of course. It still is hard to use, but most people don't need to interact with it that way most of the time.

Another barrier to entry for people creating containers is that they have to think about resource usage in a way they didn't have to when everything was on the host. They could just install a new thing on the host and the resource would be there.

Now, they have to make sure that all the resources they need are inside the container. For a sysadmin who's just working on the box and trying to run some tool, the idea of putting that tool in a container really doesn't make sense until they can treat the container the same way they treat an installed package.

Gordon:  Do sysadmins need to just suck it up, or is there something we can do for them?

Mark:  First thing is that, no, they don't have to suck it up. They are the ones doing the job. It's up to them to decide when to use this stuff. We can advocate and we can try and give them the tools, and we can try and make a point and help people understand.

We have the responsibility of understanding too. They are our users. If you present software to a user and it doesn't help the user, they are not going to use it, no matter who they are. We do have a responsibility within the container community to look at their usage, look at their needs, and find ways to help them.

There are a couple of projects I know about right now that are trying to address that. One is the Fedora Modules project; Langdon White, a guy here, is one of the leads on it.

I'm going to do a little aside here. One of the objections that people had to containers was they said, "Well, they're just packages. We've done this before. We know how packages work."

I would disagree that that is a sufficient answer, because these are packages with other stuff in them that can work in different ways, but they had a certain point. If you're going to use containers to do sysadmin jobs--sysadmin tasks on a host--you need to be able to treat them like a package. Sysadmins need to be able to use a model similar to what they're used to.

Fedora Modules is an attempt to take packages which are commonly installed on hosts and put them into containers that can be used in a manner similar to packages. The best examples right now are things like web servers and certain system services. The packages they're trying to address early are things that sysadmins would use on a regular basis.

They would commonly install a web server, Nginx or Apache. They'd install it as an RPM, and then they'd configure a bunch of root-owned files. The Modules project is trying to produce a container image which can be used the way an Nginx RPM would be used, in that you say "container install" this thing and "container enable," instead of "package install" and "package enable."

Now, you have Nginx running on your box. It's running in a container instead of as a host process. The benefit, if you're using containers, is that you can run multiple Nginx instances on the same host without having to worry about separating the configuration files. They can run as independent things.
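
A hypothetical illustration of that benefit: two independent Nginx instances on one host as containers, each with its own config directory and port, so there are no shared files to keep separated. The image name, ports, and paths here are made up:

```python
# Build 'docker run' commands for two isolated Nginx instances. Each
# container sees only its own configuration, so the instances don't
# conflict the way two copies of a host-installed RPM would.

def nginx_container(name: str, host_port: int, conf_dir: str):
    """Build a 'docker run' command for one isolated Nginx instance."""
    return ["docker", "run", "-d", "--name", name,
            "-p", f"{host_port}:80",
            "-v", f"{conf_dir}:/etc/nginx/conf.d:ro",
            "nginx"]

site_a = nginx_container("web-a", 8080, "/srv/site-a/conf")
site_b = nginx_container("web-b", 8081, "/srv/site-b/conf")
for cmd in (site_a, site_b):
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # would start each container
```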

The Modules project is just trying to create these containers which are analogues to host RPMs for things where that's appropriate. Chrony was one I heard someone talk about--so NTP services, web services, and other system services like that don't necessarily have to be bound to the host, especially ones that are for users, like the Apache server or a MySQL server.

There's no good reason for that to be installed on the host, and this would provide a way to avoid it.

The second project that is trying to address it is working from the other side, from the developer side. That's the Fedora Layered Docker Image Build Service. The people in the Fedora Project have looked at the model, the build model for RPMs, and said, "Why don't we apply this to containers?"

They've created a container build service, which is an analogue to the RPM submission and build service, so that instead of submitting an RPM spec which points to a GitHub repo somewhere for the software, you submit a Docker container spec--it might be a Dockerfile, but it might also be some other Docker build mechanism.

What you get is a professionally packaged container in a well‑known repo, signed by the Fedora Project, so that unlike Docker Hub, where anyone can put anything out there, in the Fedora Project it has to go through a vetting process. It has a set of standards in the same way that RPM specs have a standard. They have maintainers. They have a certain kind of tracking.

These two projects--the Modules project, which works from the sysadmin's side, and the container build project, which works from the package developer's side--are working toward that middle, where a sysadmin could choose, instead of doing "dnf install" or "yum install nginx," to do "container install" or "module install nginx."

For them, the behavior is very analogous.

Gordon:  Bring this home. I'm going to steal something that you mentioned to me a little bit ago: we talk about having green‑field and brown‑field environments and having to deal with both of those, but from a skills level and, perhaps more importantly, from an experience level, there really is no green‑field as far as sysadmins are concerned.

You can't just assume that they're a blank slate and forget all about the processes and experience that they've acquired over decades in some cases.

Mark:  Yeah. When I said it, it was the first time I had really thought about it that way: "Yeah, you can't treat them the same way."

In the container world, we have done an awful lot of work which assumed green‑field, or on which green‑field was imposed, because, like I said, people started trying to stuff whole applications with multiple processes into a container, and they very quickly found that that was a real problem.

They would try to tease apart the components of an existing application and refactor it, and they'd go, "Oh my God. This is such a pain. It's not really working," and they would inevitably either quit or go back to green‑field and say, "Look. Let's redevelop these as containers, as microservices, from the start," because of the lack of a way to migrate easily.

I think part of the reason that lack is there is that we're treating containers as the thing that runs. We're still treating containers as something we build, not something we use. As we learn the patterns--we've talked about this before--we're still in the position of learning the patterns of how containers are used well. As we learn those patterns, we're going to start eliminating variables.

We're going to start eliminating parameters. We're going to start adding defaults and assumptions, and we're going to start addressing these real‑world use cases in ways that hide the minutiae that doesn't matter anymore. Someone who uses a package, who installs a package, doesn't care about how it was built, doesn't care where that source code came from, doesn't care...

I mean, they can go find out, but as far as a user goes, they just "yum install" a package, and they're done. That's all they should have to do, and I think we'll get there with containers. I think we'll get there with containers for sysadmins. It will be longer before we get there for container services.

I've heard somebody recently, in a position of advocacy, say, "Well, Kubernetes, OpenShift, or whatever will be the new operating system." While I think some kind of container cloud infrastructure is in the offing in the long term, I think we have a ways to go before we get there, and I think there are a whole lot of transitional states we're going to go through.

That's kind of where I work: "What works now?" I want to look ahead, but I need to remember that there are people doing the work now.

Gordon:  I'm actually giving a presentation at MonkiGras in a couple of weeks, and hopefully I get this podcast posted before then, about packaging, not really software packaging, but the grand history of packaging going back to pottery. One of the things I bring forward in that is that, yeah, a lot of early packaging was pretty much functional.

You put your wine in a clay jar of some sort, but really where we've progressed to with packaging is a much more almost experiential model of packaging, where you make it easier to consume, easier to use, easier to have confidence in the elements that are part of that package--so many of the things we're trying to do with OpenShift for both developers and operations, for example.

You're absolutely right. Containers are probably ultimately going to end up being one of those components that most people don't need to think about very much, whereas the actual packaging and the actual user experience, whether they're a sysadmin or developer, is at some higher level platform, for the most part.

Mark:  That comes back to something I said earlier. The people who object to containers, who brush them aside on the grounds that "they're just packages and we've been there and done that," are ignoring a very significant part of the advancement of containers. It's the contents, and it's some structure for the contents, which an RPM would also have.

It's got the spec for building it, and, yeah, we've done those things before. What it has that traditional packages don't is usage metadata. Traditional packages are static. They have information about how to install themselves, which is an advance over tarballs, but they don't have the metadata about how they're expected to be used. That's the significance of the container packaging.

We don't have the semantics yet--we're developing container semantics for that packaging metadata, which will say, "Here's how I expect this software to be used. Here are the inputs it expects. Here are the parameters it expects. Here are the other packages, or the other containers, it expects to interact with." I think that's what the people who think containers are just packages are overlooking.

It's an area of research that we still don't know all the answers to.

Wednesday, January 04, 2017

Optimizing the Ops in DevOps

This post is based on my recent presentation at DevOps Summit Silicon Valley in November 2016. You can see the entire presentation here.


We call it DevOps but much of the time there’s a lot more discussion about the needs and concerns of developers than there is about other groups. 

There’s a focus on improved and less isolated developer workflows. There are many discussions around collaboration, continuous integration and delivery, issue tracking, source code control, code review, IDEs, and xPaaS—and all the tools that enable those things. Changes in developer practices may come up—such as developers taking ownership of code and pulling pager duty.

We also talk about culture a great deal in the context of developers and DevOps. About touchy-feely topics like empathy, trust, learning, cooperation, and responsibility. It can all be a bit kumbaya.


What about the Ops in DevOps? Or, really, about the other constituencies who should be part of the DevOps process, workflow, and even culture? Indeed, DevSecOps is gaining some traction as a term. DevOps purists may chafe at “DevSecOps” given that security and other important practices are supposed to already be an integral part of routine DevOps workflows. But the reality is that security often gets more lip service than thoughtful and systematic integration.

But what’s really going on here is that we need to get away from thinking about Ops-as-usual (or Security-as-usual) in the DevOps context at all. This is really what Adrian Cockcroft was getting at with the NoOps term; he didn’t coin it but his post about NoOps while he was at Netflix kicked off something of an online kerfuffle. Netflix is something of a special case because they are so all-in on Amazon Web Services, but Adrian was getting at something that’s more broadly applicable. Namely that, in evolved or mature DevOps, a lot of what Ops does is put core services in place and get out of the way.


Ironically, this runs somewhat counter to the initial image of breaking down the wall between Dev and Ops. Yes, DevOps does involve greater transparency, collaboration, and so forth to break down siloed behaviors, but there’s perhaps an even stronger element of putting infrastructure, processes, and tools in place so that developers don’t need to interact with Ops as much while being (even more) effective. One of the analogies I like to use is that I don’t want to streamline my interactions with a bank teller. For routine and even not so routine transactions, I just want to use an ATM or even my smartphone.

It’s up to Ops to build and operate the infrastructure supporting those streamlined transactions. Provide core services through a modern container platform. Enable effective automated developer workflows. Mitigate risk and automate security. But largely stay behind the scenes. Of course, you still want to have good communication flows between developers and operations teams; you just want to make those communications unnecessary much of the time.

(At the same time, it’s important for Dev and Ops teams to understand how they can mutually benefit by using appropriate container management and other toolchains. There’s still too much of a tendency by both groups to think of something as an “ops tool” or a “dev tool.” But that’s a topic for another day.)

Let’s look at each of those three areas.

Modern container platform


A DevOps approach can be applied just about anywhere. But optimizing the process and optimizing cloud-native applications is best done on a modern platform. Take it as read that most IT is going to be brownfield and DevOps may even be a good bridge between existing systems, applications, and development processes and new ones. But here I’m focusing on what’s optimized for new apps and new approaches.

You need scale-out architectures to meet highly elastic service requirements. Application designs with significant scale-up components simply aren’t able to accommodate shifting capacity needs. 

Everything is software-defined because software functions, such as network function virtualization and software-defined storage, are much more flexible than when the same functions are embedded in hardware.

The focus is on applications composed of loosely-coupled services because large monolithic applications can be fragile and can’t be updated quickly.

A modern container platform enables lightweight iterative software development and deployment in part because modern applications are often short-lived and require frequent refreshes and replacements. 

As I wrote about in The State of Platform-as-a-Service 2016, a PaaS like Red Hat’s OpenShift has evolved to be this modern container platform, embracing and integrating docker-format containers, using Kubernetes for orchestration, and using Red Hat CloudForms (based on the ManageIQ upstream project) for open source hybrid cloud management.

Automated developer workflows


When thinking about the toolchain associated with DevOps, a good place to start is the automation of the continuous integration/continuous delivery (CICD) pipeline. The end goal is to make automation pervasive and consistent using a common language across both classic and cloud-native IT. For example, Ansible allows configurations to be expressed as “playbooks” in a data format that can be read by both humans and machines. This makes them easy to audit with other programs, and easy for nondevelopers to read and understand.
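
As a minimal illustration of the kind of playbook meant here (the inventory group and package are illustrative, not from the presentation):

```yaml
# A playbook is plain data: readable by humans and easily parsed by programs.
- name: Ensure web servers are configured
  hosts: webservers        # illustrative inventory group
  become: true
  tasks:
    - name: Install nginx
      package:
        name: nginx
        state: present
    - name: Start and enable nginx
      service:
        name: nginx
        state: started
        enabled: true
```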

A typical automated workflow begins with operations provisioning a containerized development environment, whether a PaaS or more customized environment. This provides an example of how a mature DevOps process separates operations and developer concerns; by providing developers with a dynamic self-service environment, operations can focus on deploying and running stable, scalable infrastructure while developers focus on writing code.

Automation then ensures that the application can be repeatedly deployed. Many different types of tools are integrated into the DevOps workflow at this point. For example:

  • Code repositories, like Git
  • Container development tools to convert code in a repository into a portable containerized image that includes any required dependencies
  • Vagrant for creating and configuring lightweight, reproducible, and portable development environments
  • IDEs like Eclipse
  • CICD software like Jenkins

Mature DevOps systems may even push the code directly to production once it has passed automated testing. But this isn’t about removing Ops from its role of ensuring stable and robust production systems. Rather, it’s about automating the processes to ensure that deployed code meets set criteria without ops needing to be directly involved with each deployment.

Mitigate risk and automate security


Ops is also ultimately chartered with protecting the business. This doesn’t mean eliminating all risk, which can’t be done. But it does mean mitigating risk, which is accomplished in part by managing the software supply chain and by automating away sources of manual error.

That’s not to say that security is purely an ops concern, hence the aforementioned DevSecOps term. Creating a mindset in which everyone is responsible for security is key, as is the practice of building security into development processes. Security must shift from a reactive, defensive posture to a proactive one that is both automated and constant.

Among the practices to be followed in such a proactive environment are:

  • Components built from source code using a secure, stable, reproducible build environment
  • Careful selection, configuration, and security tracking of packages
  • Automated analysis and enforcement of security practices
  • Active participation in upstream projects and communities
  • Thoroughly validated vulnerability management process
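As one illustration of the automated analysis item above, consider a check that scans source lines for likely hardcoded credentials before they reach a build. This is a deliberately simplistic sketch; real scanners use far richer rule sets, and the patterns and sample lines here are hypothetical.

```python
# Illustrative secret scan: flag source lines that look like hardcoded
# credentials. The regex and the sample source are simplified examples.
import re

SECRET_PATTERN = re.compile(
    r"""(password|passwd|secret|api_key)\s*=\s*["'][^"']+["']""",
    re.IGNORECASE)

def scan_for_secrets(lines):
    """Return (line_number, line) pairs that look like hardcoded secrets."""
    return [(i, line) for i, line in enumerate(lines, start=1)
            if SECRET_PATTERN.search(line)]

source = [
    'db_host = "db.example.com"',
    'password = "hunter2"',             # should be flagged
    'api_key = os.environ["API_KEY"]',  # not flagged: no quoted literal
]
for lineno, line in scan_for_secrets(source):
    print("line %d: %s" % (lineno, line))
```

Wired into a CI pipeline, a check like this enforces the practice on every commit instead of relying on review-time vigilance.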

I think this quote from Gartner captures the required dynamic well: "Our goal as information security architects must be to automatically incorporate security controls without manual configuration throughout this cycle in a way that is as transparent as possible to DevOps teams and doesn't impede DevOps agility, but fulfills our legal and regulatory compliance requirements as well as manages risk.” (Gartner. DevSecOps: How to Seamlessly Integrate Security Into DevOps. September 2016. G00315283)



Dev: Nelson Pavlosky/flickr under CC http://www.flickr.com/photos/skyfaller/113796919/

Ops: Leonardo Rizzi/flickr under CC http://www.flickr.com/photos/stars6/4381851322/

Piggy bank: https://www.flickr.com/photos/marcmos/3644751092

Stop: https://www.flickr.com/photos/r_grandmorin/6922697037

DevOps wall: Cisco


Tuesday, November 15, 2016

Data and DevOps with Splunk's Andi Mann


Andi Mann is Chief Technology Advocate at Splunk. In this podcast, he discusses some of the ways in which data plays an important role in DevOps. I’ve known Andi for ages since we were both IT industry analysts and we had a chance to sit down at CloudExpo/DevOps Summit/IoT Summit in Santa Clara where Andi was chairing a DevOps track and I was one of the speakers. (We also did a data and DevOps panel together on the main stage but that video doesn’t seem to be up yet. I’ll post once it is.)

Among the topics we tackle are choosing appropriate metrics that align with the business rather than just technical measures, creating feedback loops, using data to promote accountability, and DevSecOps.

Show notes:

Listen to MP3 (22:13)

Listen to OGG (22:13)


Gordon Haff:  I'm sitting down here with an old analyst mate of mine, Andi Mann. Formerly of CA, also the author of some books, and now he is the chief technology advocate with Splunk. What we're going to talk about today is data in DevOps. Welcome, Andi.

Andi Mann:  A lot of the customers I talk to, who are doing DevOps in various versions using Splunk...It boils down to three key areas that they really want to know about, the metrics that matter for them.

The first is really about how fast are they? What's their cycle time? How long does it take for an idea to get in front of a customer? How long does it take someone in business to come up with something and then basically make money from it, or, in government, to service their citizens with it? That cycle time is really important, the velocity of delivery.

The second key area that people look at is around the quality of what they're delivering. Are they doing good? Are they delivering good applications? Are they creating downtime? Are they having availability issues? Is one release better than another?

The third area is really around what sort of impact do they have? Measuring real business goals, MBOs, things like revenue and customer sign‑up rates, and cart fulfillment, and cart abandonment. These sorts of things. Those are the metrics that my customers, the people I talk to are interested in for DevOps, closing those feedback loops in those three areas.

Gordon:  One of the things I find interesting, what you just said, Andi, is that you read these DevOps surveys, DevOps reports, and often the metrics, or at least what they're calling metrics, are framed in much more technical terms. How many releases do we have per year, or per week, per hour?

What's the failure rate? How quickly can we do builds? How quickly can we integrate? Which, I think to your point, are probably worth measuring, but they're really...The ultimate goal of DevOps is not to release software faster.

Andi:  Exactly. It's interesting because you do look at these metrics in isolation, and they matter. All this matters. 10 deploys a day, we all know that from the 2009 Velocity talk. That matters, but 10 deploys a day is no good if they're all bad deploys. You need to measure quality in that.

But even if it's a good quality deploy and you do it quickly, if it's not moving the needle on what your business wants you to be doing, then again, it doesn't matter. I think it's actually really important to connect these together so you really are getting metrics, correlating metrics, that matter across the whole range to really understand whether you're doing good or not.

Gordon:  One of my favorite Dilbert cartoons, I don't remember the exact wording, but it's to the effect of...Pointy Hair says, "We're now going to measure you on the number of lines of code you write," and Wally says, "I'm going to go off and write myself a new car today."

Andi:  [laughs] Yeah, exactly. That's one of the things that I actually do measure. We measure it internally. A bunch of our customers do actually measure code volume. There's a couple of interesting reasons for that. Especially in a DevOps and Agile mode, actually delivering too much code can be a signifier that you're doing things badly.

You're writing too much code, you're doing too much in one release rather than doing small, iterative releases. It can also signify that one person has too much of a workload. When you think about DevOps and the concepts around empathy and wanting to make sure that life doesn't suck for everyone, when one person is doing all the work, that sucks for them.

There are actually good things that come out of measuring code volume [laughs] but saying that more code equals better code, equals a bonus? That's a really bad thing. [laughs]

Gordon:   I think a lot of people tend to lump data metrics into this one big bucket. As we've had discussions before, there are these business metrics which have to be somehow connected to things.

It's not clear that overall company revenue is necessarily a good DevOps metric. Some of the other things you mentioned certainly are. In many cases, it does make sense to collect a lot of underlying data for data analytics and things like that. Then, you also have alerts.

Andi:  Yeah, the business stuff is really interesting. I know one of our customers who delivers software-as-a-service. They're a SaaS company, cloud native and all that. Their developers actually do care about who uses specific features.

They'll implement a feature. They do canary releases. They'll implement a feature on 10 out of 1,000 servers, or whatever. A certain volume or percentage of their customers will get access to it. Then they'll measure, using Splunk, the way that those features are being used or not. They also measure the satisfaction of those customers.

They've got these nice smileys, and tick marks, and stuff that say, "Yes, I enjoyed using this feature." They can correlate that together, and it actually means that the next day after doing a commit, after doing a release, they actually know whether the business use case is being satisfied, which is very cool.

I know a television company in the UK that we work with. They actually send reports on a weekly basis, I think it is, to their marketing department, based on whether users are using the website, what they're doing on the website, whether they're clicking through on competitions.

That's actually really important, but obviously mostly what people are doing in using data and the feedback...Closing the feedback loops is what I'm talking about here at DevOps Summit.

They're closing the feedback loop around those technical measurements.

Am I creating more bugs? Am I creating availability issues? Am I creating problems with uptime? Am I closing out the feature set that is in the story or in the epic that I was promising to do? Partially, it's also around this accountability to each other. Am I doing what I'd promised I'd do?

Gordon:  Talk a little more about accountability.

Andi:  Yeah, that's one of my soapboxes at the moment. I see a lot of the empowerment that DevOps gives developers to make decisions. I think that's great, especially in companies where you've got systems thinking and they understand their role in the organization and what it means to deliver good outputs for their customers.

You give them a lot of responsibility. Their manager is the leader. You give your developers, and your operations team, and those DevOps professionals a lot of responsibility and lot of empowerment to do the right thing.

Also, I think that there's a need for them to be accountable for doing the right thing as well. Especially as DevOps grows in larger organizations and there are more and more people involved. Also, with the concept that DevOps is about helping and making sure that everyone is having a good experience in their life and their work.

As a developer, you're making sure that operators aren't getting called out late at night, and all this sort of stuff. If DevOps is about helping to work with each other, to collaborate, to communicate better, to make sure each other's lives get better as Dev and Ops professionals, then I think you need to be accountable in two ways.

You need to be accountable to your business, which often means being accountable to your manager for doing the work that you're meant to do, and doing the work you promised you would, within the bounds of the responsibility you've been given.

It's also being accountable to each other, for doing good work, and doing the right work in ways that help your whole team move forward and make everyone else's life positive. I think we talk a lot about empowerment and enablement. We don't really talk much about the flip side of that, which I think is that accountability.

Gordon:  There's the culture talk around DevOps, and we did have lots of discussions around culture and some of the ways that it can be overextended and over‑applied. Yeah, it can turn into this "don't fear failure," empathy, transparency, etc. Unicorns farting rainbows. This very touchy-feely, everyone's happy and sings "Kumbaya," but you are, at the end of the day, being paid to produce business outcomes.

There does need to be some accountability there. If you crash the SQL server three weekends in a row, and call in Ops, somebody's going to have to talk with you, as they should.

Andi:  Exactly. Especially when you talk about the DevOps toolchain and the life cycle of software, it's very complex and opaque to try to see what's going on at every stage, especially if you're a manager who's not necessarily fully fluent in specific tools. They can't dig into the specific tools to have a look at that.

I think reporting up to your management and reporting to each other and saying, "I introduced these bugs and I'm sorry for it. I won't do it again." By the same token, "I introduced these newest features, and they were really successful. We should all celebrate that as a team."

I think that accountability is actually really important. You'll see this in manufacturing as well where we get a lot of our examples from. You'll see that if one person makes the same mistake several times, then they'll get into a training program, or they'll get different mentoring.

Maybe they'll move into a different part of the line where they're better suited, and their skills are better suited. You don't know how to make your team better if you're not being accountable to each other, and to your management.

That's, I think, something we've got to step up to as DevOps professionals, for want of a better term: how do we be accountable to each other, and to the company that, as you said, pays us to do the job?

Gordon:  You just talked about manufacturing. You just mentioned quality, and I think that's a pretty good segue because we often think about DevOps primarily, well, through the lens of the developer for one thing, but that's another topic for another day.

We also tend to view DevOps, first and foremost, through the lens of this velocity, business agility, and so forth, but there is a very important quality component there as well. What are some of the ways that data can help to surface that quality component?

Andi:  Absolutely. Some of the things we're looking at ‑‑ and our customers are doing a lot of this at the moment ‑‑ are areas like code coverage in tests, number of defects, and defect rates per release. Looking at aggregating and correlating the quality metrics out of multiple test and scanning tools.

Doing static analysis and looking at the defect rates, doing dynamic analysis, and then also looking at the defect rates, as well as application performance and health scores. Looking at the performance in terms of resource utilization, response time, availability, execution failures, and so forth.

Comparing the current release in production with the next release just about to come forward, and being able to run that over time, so you can see whether you're making quality improvements over time.

If you're able to actually give your application a health score, and then you can measure that not just in production, but also in staging, or pre‑prod, whatever you want to call it, then you can start to make sure that you're getting better with every release. Your quality is going up with every release.

You can do that with actual data, real measurements coming out of these testing tools, as well as out of actually running the application in a stood‑up environment. There's lots of feedback loops you can close there.

Once you start to find problems, especially in production but also in pre‑prod and staging, you feed those things back into the test cycle so that you never find the same mistake twice, because the first time you find it, the next time you'll test for it.

Gordon:  This idea of doing things incrementally in stages, before they hit production, is really important from a security perspective as well. I was just having a conversation with one of my colleagues, or actually several of my colleagues, about this kind of tension between the traditional security guy who is sort of, "Stop. Stop. Don't push it out there," and this idea of whether you like the term or not, DevSecOps, where security gets baked in, and added incrementally.

What we were saying, and what was really coming out as we were having this discussion, was that the reason there's this tension, or maybe disconnect, is that from the security guys' point of view, serious security flaws are being pushed out into production.

Well, that is something that simply needs to be stopped. But to the degree that you can tolerate security failures and errors that don't hit the actual production environment, because you found them through automated testing or whatever, this incremental, sometimes‑breaking‑things sort of process makes more sense.

Andi:  Yeah. Absolutely. This is actually something I've done a little bit of work, and most of the work is being done by someone that you probably know well, Ed Haletky of the TVP, @Texiwill on Twitter.

He's done a bunch of work, and I've put in my two cents worth, and it's probably worth maybe one. Looking at security, and security testing, pen testing, code quality testing, so finding things like potential SQL injection, these sorts of things.

Also using some of those tools, like Fortify, which will do quality-of-code scanning for security purposes. You can start to shift left in that respect, but also continue to get inputs from security testing even post release. There's no reason why security testing can't keep going even after you've released.

You can get to a certain coverage rate. This is where data helps. You get to a 90 percent, or a 92 percent, or a 95 percent coverage rate, or confidence level if you will. You go, "OK, I'm ready to release. I know that the remaining five percent is potentially low impact, or low risk. I'll put it out there anyway, but continuing to test."

There's some really interesting work out there that Ed's published about cloud, cloud promotion, and cloud delivery that actually really focuses on using these metrics from security testing, both pre and post release, which I think is actually really important.

Gordon:  We're going to be hearing a lot more about this whole security angle everywhere. This is partly an IoT show. We've heard a lot about security. I'm not sure we've heard a lot of solutions, but we've heard a lot about security.

Obviously, it is a big part of the DevOps discussion. It's a big, scary world out there, and it's pretty universally recognized that having an auditor sign off once a year, and then you don't think about security for that application for another six months or whatever, really doesn't work today.

Andi:  Yeah. It's not my joke. I saw someone post it the other day. "What did you get owned by? Your toaster or your fridge?" It's so true, especially in IoT. But in a DevOps perspective, or DevOps context, being able to do that continuous security testing, I think, is really important, and bringing a shift left to security.

We talk about a shift left in all sorts of other areas, and we're doing it with QA, which I think is awesome. We need to start doing it more with security, I believe. At Splunk, we do have a whole security practice around security incident and event monitoring and user behavior analytics. Being able to start to apply some of that in the test, pre‑prod, and staging environments I think is really important.

Being able to do some automated audit reporting around what is happening: penetrations, security violations, passwords or PII exposure, potential hard-coded passwords, stuff like that. There's a bunch of stuff that developers could be, and should be, responsible for that actually makes a security pro's life easier, not harder. I think there's a lot of work yet to be done on that.

Gordon:  Absolutely. I'd go back to DevSecOps. I think there's this school of thought that, well, if you read the Phoenix Project properly, you wouldn't have to have this discussion; you'd know security was baked in. Meanwhile, in the real world, security has tended to be this separate profession.

We were both at DevOpsDays London. I still remember a security professional, I guess in his 40s, standing up in an open space and going, "I'm one of those security guys who's been getting in your way. You know, this is the first time I've ever been to an IT conference that wasn't purely a security conference."

I love that story. Certainly not to pick on that guy; it was quite brave of him, getting up like that. I think that's such a perfect illustration of how security has operated in its own world as this gatekeeper to releasing applications.

Andi:  Yeah. People joke about IT being a department of no. Security has that moniker, fair or not. Obviously, security teams are just looking out to protect the business. That's their job. Having them in the tent, I think, is a better option, and we've started to bring other teams into the tent of DevOps.

I actually gave a presentation. You can find it online at the Splunk user conference, that was titled something along the lines of "Biz PMO Dev Sec QA Biz Ops," or something crazy like that, about broadening the tent of DevOps.

Security's got to come into this tent. Bringing a security pro into your team, into your scrum, that's got to be a good start, doesn't it?

Gordon:  Right, even if they're not in the stand‑up meeting every week, or every day, at least having them be part of the team, just like there used to be a business analyst who was part of the team. Our own product and technologies operations group has their DevOps story.

I call it the "Banana Pickle Story," because they would get asked for a banana, and, as Katrinka describes it, six months later they deliver this pickle. Really, their DevOps story...Again, it's at the business level, because that's what matters to me.

They used a lot of technology, like OpenShift, a Platform-as-a-Service, and Ansible for automation, things like that. But again, they were really focused on the business story of how do we get the stakeholders iterating with us. "Oops, that banana's looking a little green. Let's dial that back to yellow and get on with the other things."

Andi:  Yeah, and this is the agile model for development, is getting someone from the business...You're creating an MVP and getting someone from the business to evaluate it, and continue to iterate with their advice.

You know that you're creating the right thing as you're creating it, rather than finding out in six months' time that you've created a pickle instead of a banana. [laughs] I love that analogy.

We should be doing that more and more with security. If security is saying no to you all the time, then maybe you're not inviting them to the party as much as you should, so that they can say yes iteratively, rather than one big no at the end.

Gordon:  Right. Just to cap off this podcast: in order to prove things to security, much less to external auditors and these other stakeholders, you need data.

Andi:  Absolutely. Exactly right. This is fundamental to what I believe. We cannot continue making decisions based on "I feel that this is the right thing to do. I think we're going to have good results here." We're living in a society that's driven by data and facts. Especially as developers or IT professionals, we need to have these feedback loops based on real data.

Not just people coming back and saying "I don't feel like you did the right thing. I don't think that this was good. I think our release worked and helped our customers." We need to come back and stop having these back‑and‑forths over opinions.

There are some very crude statements about how "Everyone's got opinions," right? I like to say, "In God we trust. All others bring data." That's how we get these real feedback loops in a systems mode: getting feedback from production systems, from customer interaction, from the security violations and the passes that we do make.

From the coverage, to know if we are doing the right thing in terms of speed, in terms of quality, in terms of impacting our business, that's where data has a huge role to play. It’s those feedback loops that DevOps depends on.

Wednesday, November 09, 2016

Podcast: Open source ecosystems with Red Hat's Diane Mueller

Traditionally, in open source, there was a lot of emphasis on singular projects. Today, it's much more about how multiple communities interact and build on each other. In this podcast, recorded at the OpenShift Commons Gathering and KubeCon in Seattle, Red Hat's Diane Mueller discusses what she's learned as Director for Community Development at OpenShift and what's coming next.
Show notes:

Listen to MP3 (17:39)
Listen to OGG (17:39)

Tuesday, November 08, 2016

Presentation: Optimizing the Ops in DevOps

As DevOps practices have been put into wide use, it's become evident that developers and operations aren't merging to become one discipline. Nor is operations simply going away. Rather, DevOps is leading software development and operations - together with other practices such as security - to collaborate and coexist with less overhead and conflict than in the past.

In my session at @DevOpsSummit at 19th Cloud Expo, I discussed what modern operational practices look like in a world in which applications are more loosely coupled, are developed using DevOps approaches, and are deployed on software-defined, and often containerized, infrastructures - and where operations itself is increasingly another "as a service" capability from the perspective of developers.

How does the operations tool chest change? How does the required skill set differ? How are the interactions between operations and other IT and business organizations different from in the past? How can operations provide the confidence to the entire organization that this new pipeline is still delivering non-functional requirements such as regulatory compliance and a secure and certified operating environment? How does operations safely consume vendor and upstream dependencies while meeting developer desires for the latest and greatest?

Operations is more important than ever for a business to derive value from its IT organization. But the roles and the goals of operations are significantly different than they were historically.

Tuesday, November 01, 2016

The state of Platform-as-a-Service 2016

If you're ignoring PaaS because early offerings didn’t meet your needs, or because you’re more focused on operations than developers, you should look again. A PaaS enables ops to support developers efficiently and to manage an underlying container infrastructure.

Circulating in drafts beginning in 2009, some variant of the NIST Cloud Computing definition used to be de rigueur in just about every cloud computing presentation. Among other terms, this document defined Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) and, even as technology has morphed and advanced, this is the taxonomy that we still largely accept and adhere to today.

That said, PaaS was never as crisply defined as IaaS and SaaS because “platform” was never as crisply defined as infrastructure or (end-user) software. For example, some platforms were specific to a SaaS, such as Salesforce.

Others, specifically the online platforms that were most associated with the PaaS term early on, were typically tied to particular languages and frameworks. These PaaSs were very “opinionated.” For example, the original Google App Engine supported an environment that was just (and only just) Python, and Heroku was all about Ruby. Heroku's twelve-factor app manifesto was an additional type of opinion: write your apps this way or they won’t really be suitable for the platform. These platforms may not have been just for hobbyists, but they were certainly much more suited to developer prototyping and experimentation than production deployments.

At the same time, platform was also used more broadly to cover the integration of a range of middleware, languages, frameworks, other tools, and architecture decisions (such as persistent storage) that a developer might use to create both web-centric and more traditional enterprise applications. Furthermore, such PaaSs as OpenShift remained not only “polyglot” but also allowed for an increasing range of deployment types both on-premise and in multi-tenant and dedicated online environments. (As well as on developer laptops using the upstream open source OpenShift Origin project.)

However, the various approaches to PaaS did have a common thread. They were bundles of technology that were largely framed as appealing to developers.

The developer angle was never the whole story though. Back in 2013, my Red Hat colleague Gunnar Hellekson talked with me about some of the operational benefits of a PaaS in government.

One of the greatest benefits of a PaaS is its ability to create a bright line between what's "operations" and what's "development". In other words, what's "yours" and what's "theirs".

Things get complicated and expensive when that line blurs: developers demand tweaks to kernel settings, particular hardware, etc. which fly in the face of any standardization or automation effort. Operations, on the other hand, creates inflexible rules for development platforms that prevent developers from doing their jobs. PaaS decouples these two, and permits each group to do what they're good at.

If you've outsourced your operations or development, this problem gets worse because any idiosyncrasies on the ops or the development side create friction when sourcing work to alternate vendors.

By using a PaaS, you make it perfectly clear who's responsible for what: above the PaaS line, developers can do whatever they like in the context of the PaaS platform, and it will automatically comply with operations standards. Below the line, operations can implement whatever they like, choose whatever vendors they like, as long as they're delivering a functional PaaS environment.

We spend a lot of time talking about why PaaS is great for developers. I think it's even better for procurements, architecture, and budget.

Today, with the rise of DevOps on one hand and containers on the other, it’s increasingly clear that a PaaS can be the sum of parts that are of direct interest mostly to developers and parts that are of direct interest mostly to operations. 

DevOps both leads to change and reflects change in a couple of areas. 

First is the number of tools that organizations are bringing into their DevOps (or DevSecOps if you prefer) software delivery workflow. Most obvious is the continuous integration/continuous delivery pipeline, most notably with Jenkins. But there are also any number of testing, source code control, collaboration, and monitoring tools that need to be integrated into the workflow. At the same time, developers still want their self-service provisioning with an overall user experience that’s tailored to how they work. A PaaS is an obvious integration and aggregation point for this tooling.

DevOps is also changing the way that developers and operations work with each other. Early DevOps discussions often focused on breaking down the wall between Dev and Ops. But this isn’t quite right. DevOps does indeed embody cultural elements such as collaboration and cooperation across teams—including Dev and Ops. But there’s also a recognition that the best form of communication is sometimes eliminating the need to communicate at all. To the degree that Ops can build a self-service platform for developers and get out of the way, that can be more effective than improving how dev and ops can work together. I don’t want to communicate more effectively with a bank teller; I want to use an ATM (or skip cash entirely).

Containers have also influenced how some organizations are thinking about PaaS. Many PaaS solutions (including OpenShift) have been based on containers from the beginning. But each platform did its own implementation of containers: in OpenShift it was Gears, in Heroku it was Dynos, in CloudFoundry it was Warden (now Garden) containers.

As the industry moved to a container standard (Docker-format with standardization through the Open Container Initiative (OCI)), OpenShift moved with it. Red Hat has helped drive that movement along with many others, though not all PaaS platforms have participated in the shift to standards.

With container formats, runtimes, and orchestration increasingly standardized through the OCI and Cloud Native Computing Foundation (where Kubernetes is hosted), there’s increasing interest from many ops teams in deploying a tested and integrated bundle of these technologies outside of any specific development environment initiatives within their companies.

That’s because the huge amount of technological innovation happening around containers and DevOps can be something of a double-edged sword. On the one hand it creates enormous possibilities for new types of applications running on a very dynamic and flexible platform. At the same time, channeling and packaging the rapid change happening across a plethora of open source projects isn’t easy—and can end up being a distraction from the ultimate business goals.

As a result, at Red Hat, we talk to customers who view OpenShift primarily through the lens of a container management platform rather than the more traditional developer-centric PaaS view. There’s still a developer angle of course—a platform isn’t much use unless you’re going to run applications on it. But sometimes developer tooling and workflows are already in place, and the pressing need is to deploy a container platform using Docker-format containers and Kubernetes orchestration without having to assemble these from upstream community bits and support them in-house.
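For a sense of what that looks like day to day, here’s a sketch of deploying an application onto such a platform using OpenShift’s `oc` client. The cluster URL, project, and application names are all made-up placeholders:

```shell
# Illustrative only: the cluster URL, project, and app names are hypothetical.
oc login https://openshift.example.com   # authenticate to an existing cluster
oc new-project demo                      # create a project (a Kubernetes namespace)
oc new-app nginx --name=hello-web        # deploy a Docker-format image
oc expose service hello-web              # create a route so the app is reachable
oc get pods                              # watch Kubernetes schedule the containers
```

The appeal for ops teams is that all of this runs against a supported, integrated platform rather than a hand-assembled stack of upstream components.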

An integrated platform leads to real savings. For example, based on a set of interviews, IDC found that:

IT organizations that want to decouple application dependencies from the underlying infrastructure are adopting container technology as a way to migrate and deploy applications across multiple cloud environments and datacenter footprints. OpenShift provides a consistent application development and deployment platform, regardless of the underlying infrastructure, and provides operations teams with a scalable, secure, and enterprise-grade application platform and unified container and cloud management capabilities.

Among its quantitative findings was 35 percent less IT staff time required per application deployed. [1]

In short, PaaS remains a central part of the cloud computing discussion even if the name is sometimes discarded for something more specific or descriptive such as container platform. What’s perhaps changed the most is the recognition that PaaS isn’t just a tool for developers. It’s also a way for ops to enable developers most efficiently and to manage the underlying container infrastructure.

[1] I’ve got some other good data points and outside perspectives that I’ll share in a future post.

Saturday, October 29, 2016

Cape Cod: Fall of 2016

Nauset Beach, Cape Cod

It’s been a few years—partly because I’ve tended to be doing a lot of October travel of late—but the stars aligned again for a Cape Cod trip this fall. We’ve pretty much settled on early October for this trip when we do it. The summer mobs are gone but the Cape is still mostly open for business and the weather is typically still nice.

We tend to stay in Eastham or Wellfleet on the lower Cape (i.e. further out on the Cape). This gets you out to where the interesting beaches and other outdoor attractions are, is easily accessible to Provincetown, but is removed from the general craziness and higher prices of P’town. Of course, if you’d rather ditch your car and hang out, P’town would be the better choice. We’ve stayed at various motels near the bike path (Cape Cod Rail Trail), though I didn’t get onto my inline skates this time. The Even'tide in Wellfleet has proven to be a good choice. I’ve also stayed at the Town Crier in Eastham (next to Arnold’s, a well-known “clam shack” in the area). There are a fair number of choices along the main drag (Route 6).


Upscale casual, but reasonably priced, is the Wicked Oyster in Wellfleet, near but not on the harbor. This is the second time I’ve eaten there and it was probably the best meal I had on both trips. This year I had a lovely pan-seared cod over a light cream broth with littlenecks, sunchokes, leeks and bacon. My dinner companion had seared scallops over a sweet pea, artichoke and goat cheese risotto, topped with pea tendrils. Both were delicious—fresh and perfectly cooked. My clam chowder starter was also excellent; I’d have been tempted by raw Wellfleet oysters but, alas, the beds were temporarily closed due to a norovirus outbreak. The one comment I would add is that, current oyster issues aside, the Wicked Oyster is light on oyster dishes given its name and location. So don’t go in with your heart set on a bunch of oyster-eating options.

Wellfleet, Cape Cod

The Bookstore and Restaurant is on the harbor—although the early October weather was such that it was a bit chilly to sit outside and fully absorb the view. I decided to have a couple of generously sized appetizers: a dozen raw littlenecks followed by an oyster stew. My friend started with the Oysters Rockefeller followed by the cranberry-walnut-crusted baked cod. It was all good—even if Antoine’s in New Orleans has forever spoiled me for anyone else’s Oysters Rockefeller. A couple of their seafood stews looked very inviting as well. I just didn’t have the appetite that night for what looked like very generous portions.

Finally, on our P’town day, we ate at Ross’ Grill, which was fine. The views of the harbor were great and the food was above average. The calamari appetizer had a nice light batter that didn’t get in the way of enjoying the thick calamari slices. The soy-ginger sauce was sort of thin but an improvement over the marinara/cocktail sauce one often gets. And I enjoyed the roast duck special with a nice berry sauce. My dinner companion enjoyed her seafood stew less though, by her own admission, it just wasn’t really what she expected. (She had in mind one of the creamier stews at the Bookstore rather than a fairly traditional Portuguese Fish Stew with lots of tomato and a thin broth.) It probably didn’t help that our waiter was a bit perfunctory and didn’t volunteer a whole lot of information in response to questions. 


As an aside, I’ll note that we ran into something this time that was unfamiliar from previous visits. The tide was high around mid-day, and these were astronomically high tides. As a result, we had to shift our timeline for one of our hikes, and the other was just challenging. Water shoes might have helped in one case, but these are big tides on often very gently sloping sands.

The following are a few favorite places, both from this trip and prior ones.

Wellfleet Bay Wildlife Sanctuary - Mass Audubon. This is one of Mass Audubon’s better sanctuaries. (I also recommend Ipswich and Wachusett Meadows.) It’s 937 acres with 5 miles of trails. A boardwalk brings you out to the ocean. It even has one or two prickly pear cactus plants, believe it or not!

Great Head hill, Cape Cod

Great Island in Wellfleet is one of the longer hikes on the Cape. It’s eight miles or so depending upon the route you take and how far you go (and how far the tide lets you go). There are a couple of monuments on the island, an old tavern site (though, in typical New England manner, there’s not much left to see other than a cellar hole), shore birds, and (mostly) beach walking. Don’t get stuck out at Jeremy Point by the high tide; it’s also difficult to even get out to the tavern site within an hour or so on either side of high tide.

Wood End and (optionally) Long Point Lighthouses from Pilgrims First Landing Park in Provincetown. There’s some limited free and unmetered parking at the small rotary. Otherwise you’ll need to head toward the town and find a parking lot.  A long breakwater composed of granite blocks connects the small park to the spit of land with the lighthouses. Walking over the breakwater is mostly straightforward but a few sections get covered at high tide and others get a bit rough (i.e. I wouldn’t personally try it in sandals). The breakwater takes you to near Wood End Light. You can extend the hike by walking over to Long Point Light.

Race Point. Cape Cod, MA

Another hike, which I didn’t do this year, is Race Point Beach and Race Point Light. (The light is now privately owned. Tours are sometimes offered and the keeper’s house is available for rent.) I don’t really remember the details of hiking it, though I do remember the lighthouse and beach, but this says it’s about 8 miles.

The Marconi Wireless Station Site is in Wellfleet. Not many original artifacts remain, but it’s worth a visit even though some of the interpretive displays were removed a few years ago and the main shelter was taken down because it was eventually going to fall off the cliff. Also nearby are Marconi Beach and the Atlantic White Cedar Swamp Trail, a particularly nice excursion for a rainy day or if you just want a short walk before you drive home.

Coast Guard Beach in Eastham also comes recommended. A shuttle runs there through Labor Day. However, be aware that after the shuttle stops running, there is a very limited parking area next to the old Coast Guard Station and no other options remotely close. So, in the somewhat off-season, it can be difficult to park there. 

Also nearby are Nauset Light and the Three Sisters, older lighthouses from the area that have been reunited away from the ocean. 

Thursday, October 13, 2016

Open source and OpenShift in government with Red Hat's David Egts

Red Hat's Chief Technologist for the North American public sector, David Egts, sat down with me to discuss some of the trends he's seeing in the public sector. In addition to being a podcaster himself (The Dave and Gunnar Show), David has years of experience working with government and related public sector organizations at all levels. In this show, he shares some of the trends he's been seeing around open source (such as the White House open source policy), the collaboration around OpenSCAP, how OpenShift is being used to manage containers, and the upcoming Red Hat Government Symposium in Washington DC.

Show notes:

MP3 audio (18:01)
OGG audio (18:01)


Gordon Haff: Today I'm joined by David Egts who's the Chief Technologist for the North American Public Sector at Red Hat. He's going to have some great insights to share with us about how government, at various levels, is adopting cloud and container technology.
Welcome, David.
David Egts:  Hey, Gordon. Glad to be here. A big fan of the show, so it's great to finally be on it after all the episodes I've listened to. Thanks for having me.
Gordon:  I should mention at this point, and we'll have a link in the show notes, that David is the co‑host with Gunnar Hellekson of his own podcast. Tell us a little bit about your podcast.
David:  It's "The Dave and Gunnar Show." If people go to dgshow.org, they can hear the podcast, where I interview a bunch of people in the open source community, people at Red Hat.
A lot of the time Gunnar and I will just get on and we'll just talk about the tech news of the day, and parenting, and all kind of other fun things like that. I do have to admit, though, the podcast wouldn't exist if it wasn't for yours being the inspiration to get things going, so thank you for all the work you've done.
Gordon:  Thanks, David. We're going to talk about a number of cloud, and government, and policy things on this show, but let's start talking about something specific. Namely, that's container adoption in the government, specifically around Red Hat OpenShift.
David: In the public sector, OpenShift interest is taking off like crazy. I think the reason for it is that the folks in government that I've been talking to, when we talk about having a container strategy, know they want to have one, but they often don't have the time or the resources to roll their own container platform themselves.
They see all of this really hot innovation coming out of open source communities and all this hot software coming out of Silicon Valley from a lot of start‑ups. Then they see products like OpenShift Container Platform, which builds on things like Docker and Kubernetes, and they see that as an integrated solution. They really are flocking to embrace it.
There are a bunch of customer success stories that we can talk about that are really fun.
Gordon:  Let's get to those in a second. I did want to just make one point to your point about essentially making container adoption easy. This really is not just a government type of thing. We see this at a lot of customers who start out, "Whoa, if Google can do it themselves, we can do it ourselves, too." They go through an iteration and find this isn't really that easy to do.
David:  No, absolutely. Then also you end up building this snowflake that you can't put an ad in the paper and hire somebody to do this, or send them somewhere for training. You incur all this technical debt. Whereas, if you have an engineered solution that you can get training for or you could hire somebody for, it's really, really powerful.
A lot of people really focus on the mission of what they're working on.
Gordon:  Tell us some specific examples that you've been working on and that you can talk about there, out in the field.
David:  Yeah, one of my favorite ones. I actually did a podcast on The Dave and Gunnar Show. We interviewed the Carolina CloudApps folks, the team at the University of North Carolina. They're providing OpenShift as a service to all of the students, and faculty, and researchers at UNC.
It's really neat to see a bunch of the things that they're doing, as far as the container densities that they're getting. They're running over a hundred apps per container host. If you think about that in the traditional virtualization space, getting like a 10:1 ratio of virtualized systems per hypervisor was great, but to get 100:1 is just amazing.
Then there are other things, too, as far as the range of people that they have to work with where it's like 18‑year‑old students that are just brand new freshmen to people approaching their retirement years in the faculty.
Being able to come up with documentation, build a community, and get people to adopt the software in a very easy way was a really neat challenge for them, which I thought was pretty amazing. Then the last thing that I thought was really neat was the whole shadow IT angle.
For any sort of IT organization, you need to be very, very compelling, by providing something like a container platform the way Carolina CloudApps does, or risk being replaced by shadow IT.
That allows them to be really relevant and deliver a lot of value to the students, and faculty, and the researchers, and to prevent them from even considering going with something from a third party or spinning up something in their dorm rooms.
Gordon:  What are some of the lessons that you would say that you've learned, that Red Hat's learned, that the customers have learned as we've gone through this process of what's rather a new set of technologies?
David:  I think security is one of the big things that I've found out. Just because people are moving into containers and you're sticking everything into a container, the security burden shifts from being mostly the responsibility of the operations team to being a shared responsibility between the development and the operations team.
You can't just flip a container over the wall, hand it to ops, and then have it go into production. It can't be these black-box containers you hand over. You need to move some of that security discipline over to the development side, so in your CI/CD processes, the same way that you do unit tests to make sure that your code behaves properly,
You also want to do security tests as part of your unit test workloads.
Gordon:  As I've been writing about security over the last maybe six months or so ‑‑ and I've been doing a fair bit about it ‑‑ one of the things that's really struck me is the evolution in thinking about security.
I think we kind of came from a point where, on the one hand, you had people that were like, "Oh, clouds are insecure. We can't use clouds." Then, on the other hand, people would be like, "Oh. Well, we'll just use a public cloud provider, and we don't need to worry about security any longer."
You had these kind of extreme viewpoints, and I think it's actually good that ‑‑ from talking to people and reading things, and working through these deployments ‑‑ most people, I won't say everyone ‑‑ but most people seem to be thinking about security more intelligently and more thoughtfully.
David:  Yeah, and it's also one of the things that I see, too, is that in the past, in the Federal government, you would have maybe annual audits or these periodic audits where, "We're gonna see if we've drifted from our security baseline."
The reality is that your adversaries, they're not going to attack you once a year. They're attacking you multiple times a day. Being able to automate your scanning, and being able to make sure that you haven't drifted from your security baseline, and being able to rapidly snap back into it is really, really powerful.
That's where tools like the atomic scan tooling that we've integrated into OpenShift are really compelling, where we work with partners like Black Duck and Sonatype, and even SCAP, where we can apply the DISA STIG to containers and make sure that they're locked down properly. It's really, really exciting work.
Gordon:  You've mentioned automation. Let's talk a little bit more about automation because, from what I've been seeing, automation is really the heart of how a lot of these organizations are evolving. They're really starting to think about, "What can I automate next? What's the next low‑hanging fruit that I can basically...don't have to worry about any longer?"
David:  Yeah, and that's where, what is it, people spend 80 percent of their budgets on keeping the lights on, which leaves 20 percent for innovation. But there's a lot of time when you have these Patch Tuesdays, and everybody's on this patching hamster wheel. It's like they spend all month patching and, before you know it, it's Patch Tuesday again.
You're just doing this over, and over, and over again, and there's absolutely no time for doing any sort of innovation at all. That's where, if you can, you automate things like security and automate your build processes. Whenever things can be automated, they should be automated.
There's an article that I wrote based on a press interview with Terry Halvorsen, who's the CIO of the DoD. He was saying that the number one driver for data center consolidation in the DoD is labor costs, that automation is the key to driving down those labor costs, and that, basically, anything that can be automated should be automated.
That really underscores that point of you really need to be able to automate as much as possible if you want to do any sort of innovation.
Gordon:  That's really just the cost side of things. In areas like security, for example, you can really increase the quality because not only does it take less work to do these manual repeated tasks, but if it's automated you can be pretty sure that it's going to happen the same way the hundredth time as it did the first time. You're not going to make a mistake that creates a vulnerability for an attack.
David:  Yeah, and your checks can be a lot more robust and a lot richer, too. If I have a human locking down a system, there's only so many checks that that human can do per hour.
But if I can make it machine readable, using tools like SCAP or tools like Ansible that can just go through everything, I can have a lot more rules and a lot more checks and have this defense in depth.
Gordon:  Let's switch gears a little bit here to talk about policy. One of the really big changes in the last few years has been the fact that government, at multiple levels, is really starting to think about open source systematically and, in some ways, perhaps embracing it more systematically than many private organizations.
David:  It'll be 10 years for me at Red Hat in February. I remember 10 years ago I would go into meetings and people were wondering if this whole open source thing was going to take off. Now it's to the point where open source, which back in the day was the insurgent, is the incumbent, and people in the government are huge consumers of open source.
We're proud to say that every tactical vehicle in the US Army is running at least one piece of open source software from Red Hat. You can go down the line with every agency. All 50 states are running Red Hat products or using open source technologies in a commercially supported way. I think that the pendulum is even swinging further from being a consumer to being a contributor and a collaborator.
We've done a lot of work as part of the open source community with the SCAP Security Guide where we've partnered with NSA, and DISA, and NIST, and all kind of other integrators, and government agencies, and folks from academia to do security baselines in an open source way. That has been very exciting to be able to come out with security baselines a lot faster than doing it yourself.
Also, the other thing that I'm seeing is that the White House just released the OMB open source policy guidance, where they talk about all of the custom‑written code that the government pays for. First off, it should be reusable by all of the agencies.
They also have a goal, over the next three years, of open sourcing 20 percent of that code and then doing an analysis to see if it's working out well. It was really neat to see the evolution from the draft policy to the final policy around all of the glueware that the government pays employees or integrators to implement.
They really want to reuse that as much as possible instead of reinventing the wheel over and over again. To me, that's really exciting.
Gordon:  Yeah, and, of course, a lot of the new policies even go beyond open source, in terms of open data, in terms of the idea that research paid for with taxpayer money should be publicly available, and so forth. Obviously, there's still a lot of work that needs to go into many of those areas, but it's certainly trending in a good direction.
David:  No, absolutely. I'm really excited by it.
Gordon:  If somebody wants to learn more about what Red Hat's doing in government, what the government itself is doing in open source, how they can get involved, what's one or two good next steps they can take.
David:  I think one of the things that they should do is check out the Red Hat Government Symposium. If people go to redhatgov.com, that's a short link to the registration site. That's our annual event in DC. This year it is on November 2nd at the Ritz‑Carlton in Pentagon City.
This is going to be really exciting where, if you think about it, the following week is the presidential election. We have the open source policy that came out. There's going to be a lot of people wondering what's going to happen over the next 12 months and how policies that are in place now will evolve over time.
It's going to be a great opportunity to network with folks. Mike Hermus, the CTO of the Department of Homeland Security, is going to give a keynote, and we're going to have a lot of executives from Red Hat giving keynotes as well, like Tim Yeaton and Ashesh Badani. I'm really excited about the event. Please, come check it out.
Gordon:  That's great, Dave. I just find it so interesting. The government often gets this reputation for being kind of a decade behind everyone else. But in a lot of respects, with an open source policy, an open data policy, and organizational openness in general, the government, in some ways, I think is ahead of a lot of the private sector.
David:  I wouldn't argue with that. A concrete example of that is the SCAP work that we've been doing as part of the SCAP Security Guide. SCAP was something that was started by NIST, the National Institute of Standards and Technology. A lot of commercial organizations like Microsoft, and Red Hat, and others got together to come up with SCAP policy that's machine readable.
I remember going back to our engineering organization and saying, "You know, we got to get this inside of our products," and we get them saying, "Oh, no. The addressable market for that is just government nerds."
Now it's to the point where people are developing PCI compliance policy as part of the SCAP Security Guide. We have contributions the world over. From what I understand, Lufthansa will run a SCAP scan on the in‑flight entertainment system every time they turn their planes on. It's really exciting to see that type of change happening.
At the Red Hat Summit, over the past couple of years, we would do SCAP sessions where Shawn Wells would give the presentation. He would poll the audience: "OK, how many people are from commercial and how many people are from Public Sector?"

A couple years ago it was like 80 percent Public Sector, and this year the poll was 85 percent commercial. It's really interesting to see how a lot of this innovation that has happened in government has actually made it for the benefit of private industry, which, to me, is a really good use of taxpayer dollars.