Tuesday, November 15, 2016

Data and DevOps with Splunk's Andi Mann


Andi Mann is Chief Technology Advocate at Splunk. In this podcast, he discusses some of the ways in which data plays an important role in DevOps. I’ve known Andi for ages since we were both IT industry analysts and we had a chance to sit down at CloudExpo/DevOps Summit/IoT Summit in Santa Clara where Andi was chairing a DevOps track and I was one of the speakers. (We also did a data and DevOps panel together on the main stage but that video doesn’t seem to be up yet. I’ll post once it is.)

Among the topics we tackle are choosing appropriate metrics that align with the business rather than just technical measures, creating feedback loops, using data to promote accountability, and DevSecOps.

Show notes:

Listen to MP3 (22:13)

Listen to OGG (22:13)


Gordon Haff:  I'm sitting down here with an old analyst mate of mine, Andi Mann. Also, formerly of CA, also the author of some books, and now he is the chief technology advocate with Splunk. What we're going to talk about today is data in DevOps. Welcome, Andi.

Andi Mann:  A lot of the customers I talk to, who are doing DevOps in various versions using Splunk...it boils down to three key areas that they really want to know about, the metrics that matter for them.

The first is really about how fast are they? What's their cycle time? How long does it take for an idea to get in front of a customer? How long does it take someone in the business to come up with something and then basically make money from it or, in government, service their citizens with it? That cycle time is really important, the velocity of delivery.

The second key area that people look at is around the quality of what they're delivering. Are they doing good? Are they delivering good applications? Are they creating downtime? Are they having availability issues? Is one release better than another?

The third area is really around what sort of impact do they have? Measuring real business goals, MBOs, things like revenue, customer sign‑up rates, cart fulfillment, and cart abandonment. Those are the metrics that my customers, the people I talk to, are interested in for DevOps, closing those feedback loops in those three areas.
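As an aside, the first of those metrics, cycle time, can be computed from little more than a pair of timestamps per change. Here's a minimal sketch; it's not anything Splunk‑specific, and the record shape is an assumption for illustration:

```python
from datetime import datetime, timedelta
from statistics import median

def median_cycle_time(records):
    """Median time from when work on a change starts (the idea, or
    the first commit) to when it reaches customers (the release).

    `records` is a list of (started_at, released_at) datetime pairs.
    The median is used so one long-running change doesn't skew the
    velocity metric.
    """
    return median(released - started for started, released in records)

records = [
    (datetime(2016, 11, 1), datetime(2016, 11, 2)),  # 1 day
    (datetime(2016, 11, 1), datetime(2016, 11, 4)),  # 3 days
    (datetime(2016, 11, 1), datetime(2016, 11, 8)),  # 7 days
]
print(median_cycle_time(records))  # 3 days, 0:00:00
```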

Gordon:  One of the things I find interesting, what you just said, Andi, is that you read these DevOps surveys, DevOps reports, and often the metrics, or at least what they're calling metrics, are framed in much more technical terms. How many releases do we have per year, or per week, per hour?

What's the failure rate? How quickly can we do builds? How quickly can we integrate? Which, I think to your point, are probably worth measuring, but they're really...The ultimate goal of DevOps is not to release software faster.

Andi:  Exactly. It's interesting because you do look at these metrics in isolation, and they matter. All of this matters. 10 deploys a day, we all know that from Velocity in 2009. That matters, but 10 deploys a day is no good if they're all bad deploys. You need to measure quality in that.

But even if it's a good quality deploy and you do it quickly, if it's not moving the needle on what your business wants you to be doing, then again, it doesn't matter. I think it's actually really important to connect these together so you really are getting metrics, correlating metrics, that matter across the whole range to really understand whether you're doing good or not.

Gordon:  One of my favorite Dilbert cartoons, I don't remember the exact wording, but it's to the effect of...Pointy Hair goes, "We're now going to measure you on the number of lines of code you write," and Wally says, "I'm going to go off and write myself a new car today."

Andi:  [laughs] Yeah, exactly. That's one of the things that I actually do measure. We measure it internally. A bunch of our customers do actually measure code volume. There's a couple of interesting reasons for that. Especially in a DevOps and Agile mode, actually delivering too much code can be a signifier that you're doing things badly.

You're writing too much code, you're doing too much in one release rather than doing small, iterative releases. It can also signify that one person has too much of a workload. When you think about DevOps and the concepts around empathy and wanting to make sure that life doesn't suck for everyone, when one person is doing all the work, that sucks for them.

There are actually good things that come out of measuring code volume [laughs] but saying that more code equals better code, equals a bonus? That's a really bad thing. [laughs]

Gordon:  I think a lot of people tend to lump data and metrics into this one big bucket. As we've discussed before, there are these business metrics, which have to be somehow connected to things.

It's not clear that overall company revenue is necessarily a good DevOps metric. Some of the other things you mentioned certainly are. In many cases, it does make sense to collect a lot of underlying data for data analytics and things like that. Then, you also have alerts.

Andi:  Yeah, the business stuff is really interesting. I know one of our customers delivers their software as a service. They're a SaaS company, cloud native and all that. Their developers actually do care about who uses specific features.

They'll implement a feature. They do canary releases. They'll implement a feature on 10 out of 1,000 servers, or whatever, so a certain volume or percentage of their customers will get access to it. Then they'll measure, using Splunk, the way that those features are being used or not. They also measure the satisfaction of those customers.

They've got these nice smileys, and tick marks, and stuff that say, "Yes, I enjoyed using this feature." They can correlate that together, and it actually means that the next day after doing a commit, after doing a release, they actually know whether the business use case is being satisfied, which is very cool.
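A canary rollout like that can be sketched in a few lines. To be clear, this isn't how that customer (or Splunk) implements it; the hash-based bucketing and the event shape are assumptions for illustration:

```python
import hashlib

def in_canary(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically bucket a user into a canary cohort.

    Hashing the (user, feature) pair gives a stable 0-99 bucket,
    so the same user always sees the same variant of a feature.
    """
    digest = hashlib.sha256(f"{user_id}:{feature}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

def satisfaction_by_cohort(events):
    """Correlate feature exposure with satisfaction scores.

    `events` is an iterable of (user_id, saw_feature, score) tuples,
    e.g. scores from the "smiley" feedback widget described above.
    Returns the average score for users who saw the feature vs. not.
    """
    totals = {True: [0, 0], False: [0, 0]}  # cohort -> [sum, count]
    for _user, saw_feature, score in events:
        totals[saw_feature][0] += score
        totals[saw_feature][1] += 1
    return {k: (s / n if n else None) for k, (s, n) in totals.items()}
```

A 10 percent rollout is then just `in_canary(user, "new-checkout", 10)` at the feature-flag check, with usage and satisfaction events logged for correlation afterward.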

I know a television company in the UK that we work with. They actually send reports on a weekly basis, I think it is, to their marketing department, based on whether users are using the website, what they're doing on the website, whether they're clicking through on competitions.

That's actually really important, but mostly what people are doing in using data and feedback...closing the feedback loops is what I'm talking about here at DevOps Summit.

They're closing the feedback loop around those technical measurements.

Am I creating more bugs? Am I creating availability issues? Am I creating problems with uptime? Am I closing out the feature set that is in the story or in the epic that I was promising to do? Partially, it's also around this accountability to each other. Am I doing what I'd promised I'd do?

Gordon:  Talk a little more about accountability.

Andi:  Yeah, that's one of my soapboxes at the moment. I see a lot of the empowerment that DevOps gives developers to make decisions. I think that's great, especially in companies where you've got systems thinking and they understand their role in the organization and what it means to deliver good outputs for their customers.

You give them a lot of responsibility. Their manager is the leader, but you give your developers, and your operations team, and those DevOps professionals a lot of responsibility and a lot of empowerment to do the right thing.

Also, I think there's a need for them to be accountable for doing the right thing as well, especially as DevOps grows in larger organizations and there are more and more people involved. There's also the concept that DevOps is about helping, and making sure that everyone is having a good experience in their life and their work.

As a developer, you're making sure that operators aren't getting called out late at night, and all this sort of stuff. If DevOps is about helping to work with each other, to collaborate, to communicate better, to make each other's lives better as Dev and Ops professionals, then I think you need to be accountable in two ways.

You need to be accountable to your business, which often means being accountable to your manager for doing the work that you're meant to do, and doing the work you promised you would, within the bounds of the responsibility you've been given.

It's also being accountable to each other, for doing good work, and doing the right work, in ways that help your whole team move forward and make everyone else's life positive. I think we talk a lot about empowerment and enablement. We don't really talk much about the flip side of that, which I think is that accountability.

Gordon:  I think with the culture talk around DevOps, and we did have lots of discussions around culture and some of the ways it can be overextended and over‑applied, it can turn into this "don't fear failure," empathy, transparency, etc. Unicorns farting rainbows. This very touchy‑feely, everyone's happy and sings "Kumbaya," but you are, at the end of the day, being paid to produce business outcomes.

There does need to be some accountability there. If you crash the SQL server three weekends in a row, and call in Ops, somebody's going to have to talk with you, as they should.

Andi:  Exactly. Especially when you talk about the DevOps toolchain and the life cycle of software. It's a very complex and opaque thing to try to see what's going on at every stage, especially if you're a manager who's not necessarily fully fluent in specific tools. They can't dig into the specific tools to have a look at that.

I think reporting up to your management and reporting to each other and saying, "I introduced these bugs and I'm sorry for it. I won't do it again." By the same token, "I introduced these newest features, and they were really successful. We should all celebrate that as a team."

I think that accountability is actually really important. You'll see this in manufacturing as well where we get a lot of our examples from. You'll see that if one person makes the same mistake several times, then they'll get into a training program, or they'll get different mentoring.

Maybe they'll move into a different part of the line where they're better suited, and their skills are better suited. You don't know how to make your team better if you're not being accountable to each other, and to your management.

That's, I think, something we've got to step up to as DevOps professionals for want of a better term, is how do we be accountable to each other, and to the company that pays us as you said to do the job?

Gordon:  You just talked about manufacturing. You just mentioned quality, and I think that's a pretty good segue because we often think about DevOps primarily, well, through the lens of developer for one thing, but that's another topic for another day.

We also tend to view DevOps, first and foremost, through the lens of this velocity, business agility, and so forth, but there is a very important quality component there as well. What are some of the ways that data can help to surface that quality component?

Andi:  Absolutely. Some of the things we're looking at ‑‑ and our customers are doing a lot of this at the moment ‑‑ are areas like code coverage in tests, number of defects, and defect rates per release. Aggregating and correlating the quality metrics out of multiple test and scanning tools.

Doing static analysis and looking at the defect rates, doing dynamic analysis, and then also looking at the defect rates, as well as application performance and health scores. Looking at the performance in terms of resource utilization, response time, availability, execution failures, and so forth.

Comparing the current release in production with the next release just about to come forward, and being able to run that over time, so you can see whether you're making quality improvements over time.

If you're able to actually give your application a health score, and then you can measure that not just in production, but also in staging, or pre‑prod, whatever you want to call it, then you can start to make sure that you're getting better with every release. Your quality is going up with every release.

You can do that with actual data, real measurements, coming out of these testing tools as well as out of actually running the application in a stood‑up environment. There are lots of feedback loops you can close there.

Once you start to find problems, especially in production but also in pre‑prod and staging, you feed those things back into the test cycle so that you never find the same mistake twice, because the first time you find it, the next time you test for it.
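That health-score idea could look something like the following sketch. The metric names and weights are illustrative assumptions, not a formula Splunk or anyone else prescribes; the real point is that the same score can be computed in staging and production and compared release over release:

```python
def health_score(metrics: dict) -> float:
    """Combine quality signals into a single 0-100 health score.

    Each input is assumed to already be normalized to 0-1, where 1
    is healthy: availability fraction, share of requests meeting the
    response-time SLO, and 1 minus the defect escape rate.
    """
    weights = {
        "availability": 0.4,
        "response_time_slo": 0.3,
        "defect_free_rate": 0.3,
    }
    return 100 * sum(weights[k] * metrics[k] for k in weights)

def improving(staging: dict, production: dict, margin: float = 0.0) -> bool:
    """True if the staged release scores at least as well as prod,
    within an optional tolerance margin."""
    return health_score(staging) >= health_score(production) - margin
```

Gating a promotion on `improving(staging_metrics, prod_metrics)` is one way to make "quality is going up with every release" an automated check rather than a feeling.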

Gordon:  This idea of doing things incrementally, in stages, before they hit production is really important from a security perspective as well. I was just having a conversation with one of my colleagues, or actually several of my colleagues, about this kind of tension between the traditional security guy who is sort of, "Stop. Stop. Don't push it out there," and this idea of, whether you like the term or not, DevSecOps, where security gets baked in and added incrementally.

What was really coming out as we were having this discussion is that the reason there's this tension, or maybe disconnect, is that from the security guys' point of view, serious security flaws do get pushed out into production.

That is something that simply needs to be stopped. But to the degree that you can tolerate failures and errors in security that don't hit the actual production environment, because you found them through automated testing or whatever, then it makes more sense as this incremental, and sometimes breaking‑things, sort of process.

Andi:  Yeah. Absolutely. This is actually something I've done a little bit of work, and most of the work is being done by someone that you probably know well, Ed Haletky of the TVP, @Texiwill on Twitter.

He's done a bunch of work, and I've put in my two cents' worth, and it's probably worth maybe one. Looking at security, and security testing, pen testing, code quality testing, so finding things like potential SQL injection, these sorts of things.

Also using some of those tools like Fortify, which will do quality‑of‑code scanning for security purposes. You can start to shift left in that respect, but also continue to get inputs from security testing even post release. There's no reason why security testing can't keep going even after you've released.

You can get to a certain coverage rate. This is where data helps. You get to a 90 percent, or a 92 percent, or a 95 percent coverage rate, or confidence level if you will. You go, "OK, I'm ready to release. I know that the remaining five percent is potentially low impact, or low risk. I'll put it out there anyway, but continue to test."
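A release gate built on that kind of confidence level might be sketched as follows. The severity labels and the 95 percent default are assumptions for illustration, not the schema of Fortify or any particular scanner:

```python
def ready_to_release(findings, coverage: float, threshold: float = 0.95) -> bool:
    """Gate a release on security-test coverage and open findings.

    `findings` is a list of (severity, resolved) pairs aggregated
    from scans. Any unresolved high or critical finding blocks the
    release, as does coverage below the threshold; remaining
    low-risk items ship, with testing continuing post release.
    """
    blocking = any(
        sev in ("high", "critical") and not resolved
        for sev, resolved in findings
    )
    return coverage >= threshold and not blocking
```

The point of encoding the gate this way is that "ready to release" becomes a data-driven, repeatable decision rather than a judgment call made fresh each time.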

There's some really interesting work out there that Ed's published about cloud, cloud promotion, and cloud delivery that actually really focuses on using these metrics from security testing, both pre and post release, which I think is actually really important.

Gordon:  We're going to be hearing a lot more about this whole security angle everywhere. This is partly an IoT show. We've heard a lot about security. I'm not sure we've heard a lot of solutions, but we've heard a lot about security.

Obviously, it is a big part of the DevOps discussion. It's a big, scary world out there, and it's pretty universally recognized that having an auditor sign off once a year, and then you don't think about security for that application for another six months or whatever, really doesn't work today.

Andi:  Yeah. It's not my joke. I saw someone post it the other day: "What did you get owned by? Your toaster or your fridge?" It's so true, especially in IoT. But in a DevOps perspective, or DevOps context, being able to do that continuous security testing, I think, is really important, and it brings that shift left to security.

We talk about a shift left in all sorts of other areas, and we're doing it with QA, which I think is awesome. We need to start doing it more with security, I believe. At Splunk, we do have a whole security practice around security incident and event monitoring and user behavior analytics. Being able to start to apply some of that in the test, pre‑prod, and staging environments I think is really important.

Being able to do some automated audit reporting around what is happening, penetrations, security violations, password or PII exposure, potential hard‑coded passwords, stuff like that. There's a bunch of stuff that developers could be, and should be, responsible for that actually makes security pros' lives easier, not harder. I think there's a lot of work yet to be done on that.

Gordon:  Absolutely. I'd go back to DevSecOps. I think there's this school of thought that, well, if you'd read The Phoenix Project properly, you wouldn't be having to have this discussion. You know, security would be baked in. Meanwhile, in the real world, security has tended to be this separate profession.

We were both at DevOpsDays London. I still remember a security professional, I guess in his 40s, standing up in an open space and going, "I'm one of those security guys who's been getting in your way. You know, this is the first time I've ever been to an IT conference that wasn't purely a security conference."

I love that story. Certainly not to pick on that guy; it was quite brave of him, getting up like that. I think that's such a perfect illustration of how security has operated in its own world as this gatekeeper to releasing applications.

Andi:  Yeah. People joke about IT being a department of no. Security has that moniker, fair or not. Obviously, security teams are just looking out to protect the business. That's their job. Having them in the tent, I think, is a better option, and we've started to bring other teams into the tent of DevOps.

I actually gave a presentation. You can find it online at the Splunk user conference, that was titled something along the lines of "Biz PMO Dev Sec QA Biz Ops," or something crazy like that, about broadening the tent of DevOps.

Security's got to come into this tent. Bringing a security pro into your team, into your scrum, that's got to be a good start, doesn't it?

Gordon:  Right, even if they're not in the stand‑up meeting every week, or every day, at least having them be part of the team. Just like there used to be a business analyst who was part of the team. Our own products and technologies operations team has their DevOps story.

I call it the "Banana Pickle Story," because they would get asked for a banana and, as Katrinka describes it, six months later they deliver this pickle. Really, their DevOps story is at the business level, because that's what matters to me.

They used a lot of technology like OpenShift, Platform‑as‑a‑Service, and Ansible for automation, things like that. But again, they were really focused on the business story of how do we get the stakeholders iterating with us. "Oops, that banana's looking a little green. Let's dial that back to yellow and get on with the other things."

Andi:  Yeah, and this is the agile model for development, is getting someone from the business...You're creating an MVP and getting someone from the business to evaluate it, and continue to iterate with their advice.

You know that you're creating the right thing as you're creating it, rather than finding out in six months' time that you've created a pickle instead of a banana. [laughs] I love that analogy.

We should be doing that more and more with security. If security is saying no to you all the time, then maybe you're not inviting them to the party as much as you should, so that they can say yes iteratively, rather than one big no at the end.

Gordon:  Right. Just to cap off this podcast: in order to prove things to security, much less to external auditors and these other stakeholders, you need data.

Andi:  Absolutely. Exactly right. This is fundamental to what I believe. We cannot continue making decisions based on "I feel that this is the right thing to do. I think we're going to have good results here." We're living in a society that's driven by data and facts. Especially as developers or IT professionals, we need to have these feedback loops based on real data.

Not just people coming back and saying, "I don't feel like you did the right thing," or "I don't think that this was good," or "I think our release worked and helped our customers." We need to stop having these back‑and‑forths over opinions.

There are some very crude statements about "everyone's got opinions," right? I like to say, "In God we trust. All others bring data." That's how we get these real feedback loops in a systems mode, getting feedback from production systems, from customer interaction, from the security violations and the passes that we do make.

From the coverage, to know if we are doing the right thing in terms of speed, in terms of quality, in terms of impacting our business, that's where data has a huge role to play. It’s those feedback loops that DevOps depends on.

Wednesday, November 09, 2016

Podcast: Open source ecosystems with Red Hat's Diane Mueller

Traditionally, in open source, there was a lot of emphasis on singular projects. Today, it's much more about how multiple communities interact and build on each other. In this podcast recorded at the OpenShift Commons Gathering and KubeCon in Seattle, Red Hat's Diane Mueller discusses what she's learned as Director for Community Development at OpenShift and what's coming next.
Show notes:

Listen to MP3 (17:39)
Listen to OGG (17:39)

Tuesday, November 08, 2016

Presentation: Optimizing the Ops in DevOps

As DevOps practices have been put into wide use, it's become evident that developers and operations aren't merging to become one discipline. Nor is operations simply going away. Rather, DevOps is leading software development and operations - together with other practices such as security - to collaborate and coexist with less overhead and conflict than in the past.

In my session at @DevOpsSummit at 19th Cloud Expo, I discussed what modern operational practices look like in a world in which applications are more loosely coupled, are developed using DevOps approaches, and are deployed on software-defined, and often containerized, infrastructures - and where operations itself is increasingly another "as a service" capability from the perspective of developers.

How does the operations tool chest change? How does the required skill set differ? How are the interactions between operations and other IT and business organizations different from in the past? How can operations provide the confidence to the entire organization that this new pipeline is still delivering non-functional requirements such as regulatory compliance and a secure and certified operating environment? How does operations safely consume vendor and upstream dependencies while meeting developer desires for the latest and greatest?

Operations is more important than ever for a business to derive value from its IT organization. But the roles and the goals of operations are significantly different than they were historically.

Tuesday, November 01, 2016

The state of Platform-as-a-Service 2016

If you're ignoring PaaS because early offerings didn't meet your needs, or because you're more focused on operations than developers, you should look again. It enables ops to enable developers efficiently and to manage an underlying container infrastructure.

Circulating in drafts beginning in 2009, some variant of the NIST Cloud Computing definition used to be de rigueur in just about every cloud computing presentation. Among other terms, this document defined Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) and, even as technology has morphed and advanced, this is the taxonomy that we still largely accept and adhere to today.

That said, PaaS was never as crisply defined as IaaS and SaaS because “platform” was never as crisply defined as infrastructure or (end-user) software. For example, some platforms were specific to a SaaS, such as Salesforce.

Others, specifically the online platforms that were most associated with the PaaS term early on, were typically tied to particular languages and frameworks. These PaaSs were very “opinionated.” For example, the original Google App Engine supported an environment that was just (and, at first, only) Python and Heroku was all about Ruby. Heroku's twelve-factor app manifesto was an additional type of opinion; write your apps this way or they won’t really be suitable for the platform. These platforms may not have been just for hobbyists, but they were certainly much more suited to developer prototyping and experimentation than production deployments.

At the same time, platform was also used more broadly to cover the integration of a range of middleware, languages, frameworks, other tools, and architecture decisions (such as persistent storage) that a developer might use to create both web-centric and more traditional enterprise applications. Furthermore, such PaaSs as OpenShift remained not only “polyglot” but also allowed for an increasing range of deployment types both on-premise and in multi-tenant and dedicated online environments. (As well as on developer laptops using the upstream open source OpenShift Origin project.)

However, the various approaches to PaaS did have a common thread. They were bundles of technology that were largely framed as appealing to developers.

The developer angle was never the whole story though. Back in 2013, my Red Hat colleague Gunnar Hellekson talked with me about some of the operational benefits of a PaaS in government.

One of the greatest benefits of a PaaS is its ability to create a bright line between what's "operations" and what's "development". In other words, what's "yours" and what's "theirs".

Things get complicated and expensive when that line blurs: developers demand tweaks to kernel settings, particular hardware, etc. which fly in the face of any standardization or automation effort. Operations, on the other hand, creates inflexible rules for development platforms that prevent developers from doing their jobs. PaaS decouples these two, and permits each group to do what they're good at.

If you've outsourced your operations or development, this problem gets worse because any idiosyncrasies on the ops or the development side create friction when sourcing work to alternate vendors.

By using a PaaS, you make it perfectly clear who's responsible for what: above the PaaS line, developers can do whatever they like in the context of the PaaS platform, and it will automatically comply with operations standards. Below the line, operations can implement whatever they like, choose whatever vendors they like, as long as they're delivering a functional PaaS environment.

We spend a lot of time talking about why PaaS is great for developers. I think it's even better for procurements, architecture, and budget.

Today, with the rise of DevOps on one hand and containers on the other, it’s increasingly clear that a PaaS can be the sum of parts that are of direct interest mostly to developers and parts that are of direct interest mostly to operations. 

DevOps both leads to change and reflects change in a couple of areas. 

First is the number of tools that organizations are bringing into their DevOps (or DevSecOps if you prefer) software delivery workflow. Most obvious is the continuous integration/continuous delivery pipeline, most notably with Jenkins. But there are also any number of testing, source code control, collaboration, and monitoring tools that need to be integrated into the workflow. At the same time, developers still want their self-service provisioning with an overall user experience that’s tailored to how they work. A PaaS is an obvious integration and aggregation point for this tooling.

DevOps is also changing the way that developers and operations work with each other. Early DevOps discussions often focused on breaking down the wall between Dev and Ops. But this isn’t quite right. DevOps does indeed embody cultural elements such as collaboration and cooperation across teams—including Dev and Ops. But there’s also a recognition that the best form of communication is sometimes eliminating the need to communicate at all. To the degree that Ops can build a self-service platform for developers and get out of the way, that can be more effective than improving how dev and ops can work together. I don’t want to communicate more effectively with a bank teller; I want to use an ATM (or skip cash entirely).

Containers have also influenced how some organizations are thinking about PaaS. Many PaaS solutions (including OpenShift) have been based on containers from the beginning. But each platform did their own implementation of containers; in OpenShift it was Gears, in Heroku it was Dynos, in CloudFoundry it was Warden (now Garden) containers.  

As the industry moved to a container standard (Docker-format with standardization through the Open Container Initiative (OCI)), OpenShift moved with it. Red Hat has helped drive that movement along with many others though not all PaaS platforms have participated in the shift to standards. 

With container formats, runtimes, and orchestration increasingly standardized through the OCI and Cloud Native Computing Foundation (where kubernetes is hosted), there’s increasing interest from many ops teams in deploying a tested and integrated bundle of these technologies outside of any specific development environment initiatives within their companies.

That’s because the huge amount of technological innovation happening around containers and DevOps can be something of a double-edged sword. On the one hand it creates enormous possibilities for new types of applications running on a very dynamic and flexible platform. At the same time, channeling and packaging the rapid change happening across a plethora of open source projects isn’t easy—and can end up being a distraction from the ultimate business goals.

As a result, at Red Hat, we talk to customers who view OpenShift primarily through the lens of a container management platform rather than the more traditional developer-centric PaaS view. There’s still a developer angle of course—a platform isn’t much use unless you’re going to run applications on it. But sometimes there are already developer tooling and workflows in place and the pressing need is to deploy a container platform using Docker-format containers and kubernetes orchestration without having to assemble these from upstream community bits and support them in-house.

An integrated platform leads to real savings. For example, based on a set of interviews, IDC found that:

IT organizations that want to decouple application dependencies from the underlying infrastructure are adopting container technology as a way to migrate and deploy applications across multiple cloud environments and datacenter footprints. OpenShift provides a consistent application development and deployment platform, regardless of the underlying infrastructure, and provides operations teams with a scalable, secure, and enterprise-grade application platform and unified container and cloud management capabilities.

Among its quantitative findings was 35 percent less IT staff time required per application deployed. [1]

In short, PaaS remains a central part of the cloud computing discussion even if the name is sometimes discarded for something more specific or descriptive such as container platform. What’s perhaps changed the most is the recognition that PaaS isn’t just a tool for developers. It’s also a way for ops to enable developers most efficiently and to manage the underlying container infrastructure.

[1] I’ve got some other good data points and outside perspectives that I’ll share in a future post.

Saturday, October 29, 2016

Cape Cod: Fall of 2016

Nauset Beach, Cape Cod

It’s been a few years—partly because I’ve tended to be doing a lot of October travel of late—but the stars aligned again for a Cape Cod trip this fall. We’ve pretty much settled on early October for this trip when we do it. The summer mobs are gone but the Cape is still mostly open for business and the weather is typically still nice.

We tend to stay in Eastham or Wellfleet on the lower Cape (i.e. further out on the Cape). This gets you out to where the interesting beaches and other outdoor attractions are, is within easy reach of Provincetown, but is removed from the general craziness and higher prices of P’town. Of course, if you’d rather ditch your car and hang out, P’town would be the better choice. We’ve stayed at various motels near the bike path (Cape Cod Rail Trail), though I didn’t get onto my inline skates this time. The Even'tide in Wellfleet has proven to be a good choice. I’ve also stayed at the Town Crier in Eastham (next to Arnold’s, a well-known “clam shack” in the area). There are a fair number of choices along the main drag (Route 6).


Upscale casual, but reasonably priced, is the Wicked Oyster in Wellfleet, near but not on the harbor. This is the second time I’ve eaten there and it’s probably been the best meal I had on both trips. This year I had a lovely pan seared cod over a light cream broth with littlenecks, sunchokes, leeks and bacon. My dinner companion had seared scallops over a sweet pea, artichoke and goat cheese risotto, topped with pea tendrils. Both were delicious—fresh and perfectly cooked. My clam chowder starter was also excellent; I’d have been tempted by raw Wellfleet oysters but, alas, the beds were temporarily closed due to a norovirus outbreak. The one comment I would add is that, current oyster issues aside, the Wicked Oyster is light on oyster dishes given its name and location. So don’t go in with your heart set on a bunch of oyster eating options.

Wellfleet, Cape Cod

The Bookstore and Restaurant is on the harbor—although the early October weather was such that it was a bit chilly to sit outside and fully absorb the view. I decided to have a couple of generously sized appetizers: a dozen raw littlenecks followed by an Oyster stew. My friend started with the Oysters Rockefeller followed by the Cranberry-walnut crusted baked cod. It was all good—even if Antoine’s in New Orleans has forever spoiled me for anyone else’s Oysters Rockefeller. A couple of their seafood stews looked very inviting as well. I just didn’t have the appetite that night for what looked like very generous portions.

Finally, on our P’town day, we ate at Ross’ Grill, which was fine. The views of the harbor were great and the food was above average. The calamari appetizer had a nice light batter that didn’t get in the way of enjoying the thick calamari slices. The soy-ginger sauce was sort of thin but an improvement over the marinara/cocktail sauce one often gets. And I enjoyed the roast duck special with a nice berry sauce. My dinner companion enjoyed her seafood stew less though, by her own admission, it just wasn’t really what she expected. (She had in mind one of the creamier stews at the Bookstore rather than a fairly traditional Portuguese Fish Stew with lots of tomato and a thin broth.) It probably didn’t help that our waiter was a bit perfunctory and didn’t volunteer a whole lot of information in response to questions. 


As an aside, I’ll note that we ran into something this time that was unfamiliar from previous visits. The tide was high around mid-day, and these were astronomically high tides. As a result, we had to shift our timeline for one of our hikes, and the other was just challenging. Water shoes might have helped in one case, but these are big tides on often very gently-sloping sands.

The following are a few favorite places, both from this trip and prior ones.

Wellfleet Bay Wildlife Sanctuary - Mass Audubon. This is one of Mass Audubon’s better sanctuaries. (I also recommend Ipswich and Wachusett Meadows.) It’s 937 acres with 5 miles of trails. A boardwalk brings you out to the ocean. It even has one or two prickly pear cactus plants, believe it or not!

Great Head hill, Cape Cod

Great Island in Wellfleet is one of the longer hikes on the Cape. It’s eight miles or so depending upon the route you take and how far you go (and how far the tide lets you go). There are a couple of monuments on the island, an old tavern site (though, in typical New England manner, there’s not much left to see other than a cellar hole), shore birds, and (mostly) beach walking. Don’t get stuck out at Jeremy Point by the high tide; it’s also difficult to even get out to the tavern site within an hour or so on either side of high tide.

Wood End and (optionally) Long Point Lighthouses from Pilgrims First Landing Park in Provincetown. There’s some limited free and unmetered parking at the small rotary. Otherwise you’ll need to head toward the town and find a parking lot.  A long breakwater composed of granite blocks connects the small park to the spit of land with the lighthouses. Walking over the breakwater is mostly straightforward but a few sections get covered at high tide and others get a bit rough (i.e. I wouldn’t personally try it in sandals). The breakwater takes you to near Wood End Light. You can extend the hike by walking over to Long Point Light.

Race Point. Cape Cod, MA

Another hike, which I didn’t do this year, is Race Point Beach and Race Point Light. (Now privately owned. Tours are sometimes offered and the keeper’s house is available for rent.) I don’t really remember the details of hiking it—though I do remember the lighthouse and beach, but this says it’s about 8 miles.

The Marconi Wireless Station Site is in Wellfleet. Not many original artifacts remain but it’s worth a visit even if some of the interpretive displays were removed a few years ago and the main shelter removed because it was going to eventually fall off the cliff. Also nearby are Marconi Beach and the Atlantic White Cedar Swamp Trail, a particularly nice excursion for a rainy day or if you just want a short walk before you drive home. 

Coast Guard Beach in Eastham also comes recommended. A shuttle runs there through Labor Day. However, be aware that after the shuttle stops running, there is a very limited parking area next to the old Coast Guard Station and no other options remotely close. So, in the somewhat off-season, it can be difficult to park there. 

Also nearby are Nauset Light and the Three Sisters, older lighthouses from the area that have been reunited away from the ocean. 

Thursday, October 13, 2016

Open source and OpenShift in government with Red Hat's David Egts

Red Hat's Chief Technologist for the North American public sector, David Egts, sat down with me to discuss some of the trends he's seeing in the public sector. In addition to being a podcaster himself (The Dave and Gunnar Show), David has years of experience working with government and related public sector organizations at all levels. In this show, he shares some of the trends he's been seeing around open source (such as the White House open source policy), the collaboration around OpenSCAP, how OpenShift is being used to manage containers, and the upcoming Red Hat Government Symposium in Washington DC.

Show notes:

MP3 audio (18:01)
OGG audio (18:01)


Gordon Haff: Today I'm joined by David Egts who's the Chief Technologist for the North American Public Sector at Red Hat. He's going to have some great insights to share with us about how government, at various levels, is adopting cloud and container technology.
Welcome, David.
David Egts:  Hey, Gordon. Glad to be here. A big fan of the show, so it's great to finally be on it after all the episodes I've listened to. Thanks for having me.
Gordon:  I should mention at this point, and we'll have a link in the show notes, that David is the co‑host with Gunnar Hellekson of his own podcast. Tell us a little bit about your podcast.
David:  It's "The Dave and Gunnar Show." If people go to dgshow.org you can hear the podcast where I interview a bunch of people in the open source community, people at Red Hat.
A lot of the time Gunnar and I will just get on and we'll just talk about the tech news of the day, and parenting, and all kind of other fun things like that. I do have to admit, though, the podcast wouldn't exist if it wasn't for yours being the inspiration to get things going, so thank you for all the work you've done.
Gordon:  Thanks, David. We're going to talk about a number of cloud, and government, and policy things on this show, but let's start talking about something specific. Namely, that's container adoption in the government, specifically around Red Hat OpenShift.
David:  In the public sector, OpenShift interest is taking off like crazy. I think the reason for it is that the folks in government that I've been talking to, when we talk about having a container strategy, know they want to have one, but they often don't have the time or the resources to roll their own container platform themselves.
They see all of this really hot innovation coming out of open source communities and all this hot software coming out of Silicon Valley from a lot of start‑ups. Then they see products like OpenShift Container Platform, which builds on things like Docker and Kubernetes, and they see that as an integrated solution. They really are flocking to embrace it.
There are a bunch of customer success stories we can talk about that are really fun.
Gordon:  Let's get to those in a second. I did want to just make one point to your point about essentially making container adoption easy. This really is not just a government type of thing. We see this at a lot of customers who start out, "Whoa, if Google can do it themselves, we can do it ourselves, too." They go through an iteration and find this isn't really that easy to do.
David:  No, absolutely. Then also you end up building this snowflake that you can't put an ad in the paper and hire somebody to do this, or send them somewhere for training. You incur all this technical debt. Whereas, if you have an engineered solution that you can get training for or you could hire somebody for, it's really, really powerful.
A lot of people really focus on the mission of what they're working on.
Gordon:  Tell us some specific examples that you've been working on and that you can talk about there, out in the field.
David:  Yeah, one of my favorite ones. I actually did a podcast on The Dave and Gunnar Show where we interviewed the Carolina CloudApps folks, the team at the University of North Carolina. They're providing OpenShift as a service to all of the students, and faculty, and researchers at UNC.
It's really neat to see the container densities that they're getting. They're running over a hundred apps per container host. If you think about that against traditional virtualization, getting a 10:1 ratio of virtualized systems per hypervisor was great, but to get 100:1 is just amazing.
Then there are other things, too, as far as the range of people that they have to work with where it's like 18‑year‑old students that are just brand new freshmen to people approaching their retirement years in the faculty.
Being able to come up with documentation, and building a community, and getting people to adopt the software in a very easy way was a really neat challenge for them, which I thought was pretty amazing. Then the last thing that I thought was really neat was the Shadow IT angle.
For any sort of IT organization, you need to be very, very compelling, by providing something like a container platform the way Carolina CloudApps does, or risk being replaced by Shadow IT.
That allows them to be really relevant and deliver a lot of value to the students, and faculty, and the researchers, so those users don't even consider going with something from a third party or spinning up something in their dorm rooms.
Gordon:  What are some of the lessons that you would say that you've learned, that Red Hat's learned, that the customers have learned as we've gone through this process of what's rather a new set of technologies?
David:  I think security is one of the big things that I've found out. Just because people are moving into containers and you're sticking everything into a container, the security burden shifts from being mostly the responsibility of the operations team to being a shared responsibility between the development and the operations team.
You can't just flip a container over the wall, hand it to ops, and then have it go into production. It can't be these black‑box containers you hand over. You need to move some of that security discipline over to the development side, so in your CI/CD processes, the same way that you do unit tests to make sure that your code behaves properly.
You also want to do security tests as part of your unit test workloads.
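As a concrete sketch of the idea David describes, a security scan can sit in the delivery pipeline as its own gate, failing the build exactly the way a failing unit test would. This is a generic, illustrative pipeline fragment, not any particular CI product's syntax; the stage names, image tag, SCAP content file, and profile name are all placeholders, and the scanner invocation stands in for whichever image-scanning tool a team actually uses:

```yaml
stages:
  - build
  - security-scan
  - deploy

security-scan:
  stage: security-scan
  script:
    # Evaluate the freshly built image against the team's SCAP baseline.
    # A non-zero exit code fails the pipeline, just like a failing unit test.
    - oscap-docker image myapp:latest xccdf eval
        --profile stig
        --report scan-report.html
        ssg-baseline-ds.xml
  artifacts:
    paths:
      - scan-report.html
```

The point is less the specific tool than the placement: the scan runs on every build, before anything reaches ops, so security regressions surface at the same moment functional regressions do.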
Gordon:  As I've been writing about security over the last maybe six months or so ‑‑ and I've been doing a fair bit about it ‑‑ one of the things that's really struck me is the evolution in thinking about security.
I think we kind of came from a point where, on the one hand, you had people that were like, "Oh, clouds are insecure. We can't use clouds." Then, on the other hand, people would be like, "Oh. Well, we'll just use a public cloud provider, and we don't need to worry about security any longer."
You had these kind of extreme viewpoints, and I think it's actually good that ‑‑ from talking to people and reading things, and working through these deployments ‑‑ most people, I won't say everyone ‑‑ but most people seem to be thinking about security more intelligently and more thoughtfully.
David:  Yeah, and it's also one of the things that I see, too, is that in the past, in the Federal government, you would have maybe annual audits or these periodic audits where, "We're gonna see if we've drifted from our security baseline."
The reality is that your adversaries, they're not going to attack you once a year. They're attacking you multiple times a day. Being able to automate your scanning, and being able to make sure that you haven't drifted from your security baseline, and being able to rapidly snap back into it is really, really powerful.
That's where tools like atomic scan, which we've integrated into OpenShift, are really compelling. We work with partners like Black Duck and Sonatype, and even with SCAP we can apply the DISA STIG to containers and make sure that they're locked down properly. It's really, really exciting work.
Gordon:  You've mentioned automation. Let's talk a little bit more about automation because, from what I've been seeing, automation is really the heart of how a lot of these organizations are evolving. They're really starting to think about, "What can I automate next? What's the next low‑hanging fruit that I can basically...don't have to worry about any longer?"
David:  Yeah. People spend 80 percent of their budgets on keeping the lights on, and that leaves 20 percent for innovation. But a lot of time goes into these Patch Tuesdays, and everybody's on this patching hamster wheel. They spend all month patching and, before you know it, it's Patch Tuesday again.
You're just doing this over, and over, and over again, and there's absolutely no time for doing any sort of innovation at all. That's where, if you can, automate things like security, automate your build processes. Whenever things can be automated, they should be automated.
There's an article I wrote about a press interview with Terry Halvorsen, who's the CIO of the DoD. He was saying that the number one driver for data center consolidation in the DoD is labor costs, that automation is the key to driving down those labor costs, and that anything that can be automated should be automated.
That really underscores that point of you really need to be able to automate as much as possible if you want to do any sort of innovation.
Gordon:  That's really just the cost side of things. In areas like security, for example, you can really increase the quality, because not only does it take less work than doing these manual repeated tasks, but if it's automated you can be pretty sure that it's going to happen the same way the hundredth time as the first time. You're not going to make a mistake in there that creates a vulnerability for an attack.
David:  Yeah, and your checks could be a lot more robust and a lot richer, too. If I had a human that is locking down a system, there's only so many checks that that human can do per hour.
But if I can make it machine readable, using tools like SCAP or tools like Ansible, the machine can just go through, and I can have a lot more rules and a lot more checks, and get that defense in depth.
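To make the machine-readable point concrete, here is a minimal sketch in Python. The XCCDF-style result document and the rule IDs in it are invented for illustration; real output from a SCAP scanner is far richer, but it has this same rule-result structure, which is what lets a machine tally failures instead of a human reading reports:

```python
import xml.etree.ElementTree as ET

# A miniature XCCDF-style result document, standing in for the output of an
# automated SCAP scan. The rule IDs here are made up for illustration.
SAMPLE_RESULTS = """\
<TestResult xmlns="http://checklists.nist.gov/xccdf/1.2">
  <rule-result idref="xccdf_org.example_rule_no_empty_passwords">
    <result>pass</result>
  </rule-result>
  <rule-result idref="xccdf_org.example_rule_ssh_root_login_disabled">
    <result>fail</result>
  </rule-result>
  <rule-result idref="xccdf_org.example_rule_auditd_enabled">
    <result>fail</result>
  </rule-result>
</TestResult>
"""

# Map the XCCDF namespace so element lookups resolve correctly.
NS = {"xccdf": "http://checklists.nist.gov/xccdf/1.2"}

def failed_rules(xml_text):
    """Return the idref of every rule that failed its check."""
    root = ET.fromstring(xml_text)
    failures = []
    for rule_result in root.findall("xccdf:rule-result", NS):
        result = rule_result.find("xccdf:result", NS)
        if result is not None and result.text == "fail":
            failures.append(rule_result.get("idref"))
    return failures

if __name__ == "__main__":
    for rule in failed_rules(SAMPLE_RESULTS):
        print("FAILED:", rule)
```

Because the results are structured data, the same few lines can process one scan or ten thousand of them, which is where the checks-per-hour comparison with a human auditor tips decisively in the machine's favor.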
Gordon:  Let's switch gears a little bit here to talk about policy. One of the really big changes in the last few years has been the fact that government, at multiple levels, is really starting to think about open source systematically and, in some ways, perhaps embracing it more systematically than many private organizations.
David:  It'll be 10 years for me in February since I joined Red Hat. I remember, 10 years ago, going into meetings where people were wondering if this whole open source thing was going to take off. Now, open source has gone from insurgent to incumbent, and people in the government are huge consumers of open source.
We're proud to say that every tactical vehicle in the US Army is running at least one piece of open source software from Red Hat. You can go down the line with every agency. All 50 states are running Red Hat products or using open source technologies in a commercially supported way. I think that the pendulum is even swinging further from being a consumer to being a contributor and a collaborator.
We've done a lot of work as part of the open source community with the SCAP Security Guide where we've partnered with NSA, and DISA, and NIST, and all kind of other integrators, and government agencies, and folks from academia to do security baselines in an open source way. That has been very exciting to be able to come out with security baselines a lot faster than doing it yourself.
Also, the other thing that I'm seeing is that the White House just released the OMB open source policy guidance, where they talk about all of the custom‑written code that the government pays for. First off, it should be reusable by all of the agencies.
They also have the goal, over the next three years, to open source 20 percent of that code and then do an analysis to see if it's working out well. It was really neat to see the draft policy evolve into the final policy, covering all of that glueware that the government is paying government employees or integrators to implement.
They really want to reuse that as much as possible instead of reinventing the wheel over and over again. To me, that's really exciting.
Gordon:  Yeah, and, of course, a lot of the new policies even go beyond open source in terms of having open data, in terms of research that's paid for with taxpayer money, should be publicly available and so forth. Obviously, there's still a lot of work that needs to go into many of those areas, but it's certainly trending in a good direction.
David:  No, absolutely. I'm really excited by it.
Gordon:  If somebody wants to learn more about what Red Hat's doing in government, what the government itself is doing in open source, how they can get involved, what's one or two good next steps they can take.
David:  I think one of the things that they should do is check out the Red Hat Government Symposium. If people go to redhatgov.com, that's a short link to the registration site. It's our annual event in DC. This year it is on November 2nd at the Ritz‑Carlton in Pentagon City.
This is going to be really exciting where, if you think about it, the following week is the presidential election. We have the open source policy that came out. There's going to be a lot of people wondering what's going to happen over the next 12 months and how policies that are in place now will evolve over time.
It's going to be a great opportunity to network. Mike Hermus, who's the CTO of the Department of Homeland Security, is going to give a keynote, and we're going to have a lot of executives from Red Hat giving keynotes, like Tim Yeaton and Ashesh Badani. I'm really excited about the event. Please, come check it out.
Gordon:  That's great, Dave. I just find it so interesting. The government often gets this reputation for being kind of a decade behind everyone else. But in a lot of respects, with open source policy, open data policy, and organizational openness in general, the government, in some ways, I think is ahead of a lot of the private sector.
David:  I wouldn't argue with that. A concrete example of that is the SCAP work that we've been doing as part of the SCAP Security Guide. SCAP was something that was started by NIST, the National Institute of Standards and Technology. A lot of commercial organizations, like Microsoft, and Red Hat, and others, came together to come up with SCAP policy that's machine readable.
I remember going back to our engineering organization and saying, "You know, we got to get this inside of our products," and we get them saying, "Oh, no. The addressable market for that is just government nerds."
Now it's to the point where people are developing PCI compliance policy as part of the SCAP Security Guide. We have contributions from the world over. From what I understand, Lufthansa will run a SCAP scan on the in‑flight entertainment system every time they turn their planes on. It's really exciting to see that type of change.
At Red Hat Summit, over the past couple of years, we would do SCAP sessions where Shawn Wells would give the presentation. He would poll the audience: "OK, how many people are from commercial and how many people are from Public Sector?"

A couple years ago it was like 80 percent Public Sector, and this year the poll was 85 percent commercial. It's really interesting to see how a lot of this innovation that has happened in government has actually made it for the benefit of private industry, which, to me, is a really good use of taxpayer dollars.