Tuesday, August 12, 2014

Links for 08-12-2014

Thursday, August 07, 2014

Why the OS matters (even more) in a containerized world

Red Hat Project Atomic Introduction

My former colleague (and frequent host for good beer at events) Stephen O’Grady of RedMonk has written a typically smart piece titled “What is the Atomic Unit of Computing?” which makes some important points.

However, on one particular point I’d like to share a somewhat different perspective in the context of my cloud work at Red Hat. He makes that point when he writes: "Perhaps more importantly, however, there are two larger industry shifts at work which ease the adoption of container technologies… More specific to containers specifically, however, is the steady erosion in the importance of the operating system."

It’s not the operating system that’s becoming less important even as it continues to evolve. It’s the individual operating system instance that’s been configured, tuned, integrated, and ultimately married to a single application that is becoming less so. 

First of all, let me say that any differences here are probably in part a matter of semantics. For example, Stephen goes on to write about how PaaS abstracts the application from the operating system running underneath. No quibbles there. There is absolutely an ongoing abstraction of the operating system; we're moving away from the handcrafted and hardcoded operating system instances that accompanied each application instance—just as we previously moved away from operating system instances lovingly crafted for each individual server. He adds—and I also fully agree—that "If applications are heavily operating system dependent and you run a mix of operating systems, containers will be problematic." Clearly one of the trends that makes containers interesting today in a way that they were not (beyond a niche) a decade ago is the wholesale shift from pet operating systems to cattle operating systems.

But—and here’s where I take some exception to the “erosion in the importance” phrase—the operating system is still there, and it’s still providing the framework for all the containers sitting above it. In the case of a containerized operating system, the OS arguably plays an even greater role than in hardware server virtualization, where the host was a hypervisor. (Of course, in the case of KVM for example, the hypervisor makes use of the OS for the OS-like functions that it needs, but there’s nothing inherent in the hypervisor architecture requiring that.)

In other words, the operating system matters more than ever. It’s just that you’re using a standard base image across all of your applications rather than taking that standard base image and tweaking it for each individual one. All the security hardening, performance tuning, reliability engineering, and certifications that apply to the virtualized world still apply in the containerized one. 

To Stephen's broader point, we’re moving toward an architecture in which (the minimum set of) dependencies are packaged with the application rather than bundled as part of a complete operating system image. We’re also moving toward a future in which the OS explicitly deals with multi-host applications, serving as an orchestrator and scheduler for them. This includes modeling the app across multiple hosts and containers and providing the services and APIs to place the apps onto the appropriate resources.  
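To make the placement idea concrete, here is a toy first-fit scheduler. It is purely illustrative—the host names and capacity units are invented for the example—and real orchestrators such as Kubernetes use far richer scheduling policies than this:

```python
# Toy sketch of container placement (illustrative only; not how any
# real orchestrator schedules). Hosts advertise free capacity, and a
# first-fit pass places each container on the first host that fits.

def place(containers, hosts):
    """containers: dict of name -> required capacity units.
    hosts: dict of name -> free capacity units (mutated in place).
    Returns a dict mapping container name -> chosen host."""
    placement = {}
    for name, need in containers.items():
        for host, free in hosts.items():
            if free >= need:
                placement[name] = host
                hosts[host] = free - need  # reserve the capacity
                break
        else:
            raise RuntimeError(f"no host can fit {name}")
    return placement

if __name__ == "__main__":
    hosts = {"node-a": 4, "node-b": 8}
    containers = {"web": 3, "db": 4, "cache": 1}
    # {'web': 'node-a', 'db': 'node-b', 'cache': 'node-a'}
    print(place(containers, hosts))
```

First-fit is the simplest possible policy; the point is only that "placing apps onto the appropriate resources" is a concrete algorithmic job the orchestration layer takes on.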

Project Atomic is a community for the technology behind optimized container hosts; it is also designed to feed requirements back into the respective upstream communities. By leaving the downstream release of Atomic Hosts to the Fedora community, CentOS community and Red Hat, Project Atomic can focus on driving technology innovation. This strategy encompasses containerized application delivery for the open hybrid cloud, including portability across bare metal systems, virtual machines and private and public clouds. Related is Red Hat's recently announced collaboration with Kubernetes to orchestrate Docker containers at scale.

I note at this point that the general concept of portably packaging applications is nothing particularly new. Throughout the aughts, as an industry analyst I spent a fair bit of time writing research notes about the various virtualization and partitioning technologies available at the time. One such set of technologies was “application virtualization.” The term covered a fair bit of ground but included products such as one from Trigence, which dealt with the problem of conflicting libraries in Windows apps (“DLL hell,” if you recall). As a category, application virtualization remained something of a niche but it’s been re-imagined of late.

On the client, application virtualization has effectively been reborn as the app store as I wrote about in 2012. And today, Docker in particular is effectively layering on top of operating system virtualization (aka containers) to create something which looks an awful lot like what application virtualization was intended to accomplish. As my colleague Matt Hicks writes:

Docker is a Linux Container technology that introduced a well thought-out API for interacting with containers and a layered image format that defined how to introduce content into a container. It is an impressive combination and an open source ecosystem building around both the images and the Docker API. With Docker, developers now have an easy way to leverage a vast and growing amount of technology runtimes for their applications. A simple 'docker pull' and they can be running a Java stack, Ruby stack or Python stack very quickly.
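The layered image format Hicks mentions can be modeled, very loosely, as a stack of layers in which an upper layer shadows the files beneath it. The following Python toy is a gross simplification—real Docker images use union filesystems and content-addressed layers, and the paths here are invented—but it captures the lookup rule:

```python
# Toy model of a layered image (a simplification: real images use
# union filesystems, not dicts). Each layer maps paths to contents;
# a read walks the layers top-down, so an upper layer shadows files
# in the layers beneath it.

class Image:
    def __init__(self, *layers):
        self.layers = list(layers)  # layers[0] is the base layer

    def add_layer(self, layer):
        self.layers.append(layer)   # new content goes on top

    def read(self, path):
        # Search from the newest layer down to the base.
        for layer in reversed(self.layers):
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

base = {"/etc/os-release": "base os"}
runtime = {"/usr/bin/python": "python runtime"}
app = {"/app/main.py": "print('hi')", "/etc/os-release": "patched"}

img = Image(base, runtime, app)
print(img.read("/etc/os-release"))  # upper layer wins: "patched"
```

The same shadowing rule is what lets a `docker pull` reuse a shared base layer while each application image contributes only its own thin layer on top.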

There are other pieces as well. Today, OpenShift (Red Hat’s PaaS) applications run across multiple containers, distributed across different container hosts. As we began integrating OpenShift with Docker, the OpenShift Origin GearD project was created to tackle issues like Docker container wiring, orchestration, and management via systemd. Kubernetes builds on this work as described earlier.

Add it all together and applications become much more adaptable, much more mobile, much more distributed, and much more lightweight. But they’re still running on something. And that something is an operating system. 

[Update: 8-14-2014. Updated and clarified the description of Project Atomic and its relationship to Linux distributions.]

Tuesday, August 05, 2014

Podcast: OpenShift Origin v4 & Accelerators with Diane Mueller

OpenShift Origin v4 has a variety of new features including native .NET application support and Puppet-based high-availability deployments. There's also a new Accelerators program to mentor community members who want to speak about and run events related to OpenShift Origin.

Links:

Listen to MP3 (0:11:09)
Listen to OGG (0:11:09)

[Transcript:]

Gordon Haff:  Hi everyone. This is Gordon Haff, in the Cloud Product Strategy group at Red Hat. I'm sitting here at OSCON with Diane Mueller, who is the community manager for OpenShift Origin. Welcome, Diane.
Diane Mueller:  All right. Thanks again for having me, Gordon. I'm totally pleased to be here again with you, and I'm totally stoked about what we've just kicked out the door last week.
Gordon:  What did you guys kick out the door last week?
Diane:  It is release 4.0 of OpenShift Origin. OpenShift Origin, if you don't know it, is a platform‑as‑a‑service. It's an open source project that's sponsored by Red Hat. I'm here at OSCON talking about deploying it on OpenStack. What we're deploying right now is the new release, which has lots of great new features, and there have been some amazing community contributions.
This release includes support for .NET. That's like the word that never gets said inside of Red Hat, .NET support. Thanks to our friends at Uhuru Software, we now have enterprise production‑ready Gear support for Visual Studio. I have the demo. ‑‑ Oh, my god. I do now have a Windows box to do that on.
Uhuru Software did a great native .NET implementation, so we have that support now. The folks at Cisco, Daneyon Hansen, a big shout out to him. He did a whole bunch of Puppet‑based high availability deployment scripts, which have been incorporated by Harrison Ripps, who's on OpenShift as the technical lead for the open source Origin project.
He's incorporated them into install.openshift.com. Now, not only can you do very simplistic or very complicated deployments with install.openshift.com, but you can also do HA deployments, which is totally cool.
We added in central and consolidated logging support, zones and regions, placement policy extensibility, a node watchman service; all kinds of really cool things have been added into the Origin release. You can get all of that if you go to openshift.com, or if you go straight into the origin.openshift.com site.
Check it out today. I really encourage you to do that, and give us your feedback. Whether you're a developer using it—there's a lot of documentation there—or a system administrator, you're going to find lots of things to like in the new release.
We're really very proud of what we've done, and what the community's contributed to this release. It's been amazing. It's been an amazing ride, so that's been really, really cool.
Gordon:  What other new stuff is going on? That sounds like quite a bit by itself...
Diane:  [laughs]
Gordon:  ...but I know you've been working on some other things in your spare time.
Diane:  Yes. The other thing is, I don't scale. I found that out. I've been traveling a lot lately. Been down to Brazil, then to Europe, and all over the world, all over North America. Preaching the gospel of open source and OpenShift, and working and connecting all of the different parties.
What we've done is a sort of riff on the Fedora Ambassadors program. We're launching, next week, the OpenShift Accelerators program. You get that car metaphor: gears and shifting and accelerators. We're creating a program for mentoring people, giving them all the tools that they need to set up user groups locally.
We'll even give you money for pizza and swag. But this is not about swag. This is really about getting the skills to talk about OpenShift, to demo it. If you're interested in this program, you can go to origin.openshift.com/accelerator and see all of the prerequisites for joining.
There are a lot of people out there besides me and the Evangelist team that have given presentations. We're going to gather all of that, put it in GitHub, create speaker notes, create some good sample apps, and we're going to coach people. Here at OSCON I got to mentor our very first Accelerator, Alex Barreto, who probably could have done without the coaching, but hey.
He's now prepped up to do presentations on OpenShift on OpenStack, so if you're looking for someone to speak on that topic you don't have to just call me. If you're looking to spin up a user‑group meeting, like Mateus Caruccio has done down in Brazil with the Getup Cloud. He's one of the contributors. They've flown up there.
Angel Rivera has hosted user group meetings. What we're really trying to do is scale the people who can go out and talk about OpenShift, and give them the tools to be more effective and, you know, some pizza money, and make sure that we coordinate all that with an events calendar, so that we know where everybody is and we can help promote those events.
If you're interested in this program, again, reach out to me on Twitter @PythonDJ, or go to the origin.openshift.com/accelerator page, sign up, and request a mentor. We would be happy to get you into the Accelerator program.
Gordon:  What's coming down the road, now that you've got this under your belt?
Diane:  There's so much going on. That's why the Accelerators program matters. With all of the interrelated projects that OpenShift consumes, from Docker to Google Kubernetes to Project Atomic, there are so many different communities that we touch that scaling is one of our biggest issues.
We need to be able to do a good job of educating people on all of these new technologies, and how they're being incorporated into OpenShift, and how OpenShift leverages them. If I have to be an expert on SELinux, ActiveMQ, memcached, Docker, OpenStack, and ManageIQ, it just doesn't scale. My brain explodes when I start thinking about all the different topics that we get requests to talk on.
So this fall, stay tuned. There is going to be a huge riff of new technologies being brought into the OpenShift umbrella, and we'll have lots of things that you'll need to get up to speed on. So, we will be broadcasting that information out very shortly, and just keep in touch and keep listening to Gordon's podcast, because I'll be back here, again, very soon.
Gordon:  Yeah. I find it amazing, the last year or two in particular, probably even just the last year, this explosion of technologies and approaches coming in. And everything touches everything else. Containers, although not a totally new concept, with Docker making them more consumable, are one of the really important changes happening in the cloud space, and really PaaS is one of the things that drove that originally.
Diane:  Yeah.
Gordon:  And just all the orchestration associated with practically scaling up applications and groups of workloads, it's just an awful lot of stuff to absorb.
Diane:  And I think the beauty of it all, I think the reason why Red Hat succeeds in this space, is that we have a very strong philosophy: not invented here is not an option. Other organizations like Google and Kubernetes, Twitter and Mesos, and Docker are external to Red Hat. We contribute to them, and we collaborate with those communities, but we don't have to dominate them.
It doesn't have to come from within Red Hat to be incorporated into the OpenShift project. And we're really clear that the only way open source works is if it's a collaboration. So often you'll hear me say "proudly found elsewhere," or PFE. That's the way that I think open source really works, and the way the technologies really advance. And that's what PaaS brought to the table: a value proposition for orchestration.
And what we brought with OpenShift, I think, was a great number of concepts that people have adopted. Now what we're seeing is some of those concepts being commoditized. So rather than maintaining a wheel that's proprietary‑ish, even though it's open source, we're embracing things like Google Kubernetes and Docker, and the next iteration of OpenShift leverages those.
It's not that it lessens the value proposition of OpenShift, what it does is it extends the community. We get to now say "Yeah, Google Kubernetes, they're working on OpenShift."
Gordon:  I probably should mention here, if we're scaring away any listeners: from my perspective, we need to know how this stuff all works underneath the covers, at least at some level. But actually, one of the beauties of OpenShift ‑‑ if you use the online service or if you use OpenShift Origin, that a system admin type has set up ‑‑ is that you as a developer can really be abstracted from an awful lot of this.
Diane:  Yes. We're bandying about a lot of names of projects here. To put it in context, you use an Internet browser, you go to a web page, you do not know what JavaScript is. You do not know, hopefully, too much HTML5 or CSS. You just use it, you use the web, and from a developer's point of view, all of these technologies that are under the hood at OpenShift, they'll just use it. It'll get deployed, rolled out, managed, and auto‑scaled for you, as a developer. And from an administrator's or the SysAdmin's side, who's administering the platform‑as‑a‑service, those are abstractions as well. You're just managing the platform‑as‑a‑service, not all the pieces and parts. That's the value proposition of platform‑as‑a‑service.
Gordon:  Great. Lots of exciting new stuff. I look forward to digging into this myself.
Diane:  All right. Glad to be here and we'll be back again soon.

Gordon:  Thanks Diane. Thanks everyone.

Links for 08-05-2014

Friday, August 01, 2014

Links for 08-01-2014

Podcast: Software-defined Networking with Red Hat's Dave Neary


Software-defined compute has been around for a while. Software-defined storage is newer but it's gaining visibility and market presence too. Software-defined networking is next in line. Red Hat's Dave Neary talks about what it is and why it's important for delivering capabilities such as VoIP and for replacing expensive single-function boxes.

Listen to MP3 (0:15:34)
Listen to OGG (0:15:34)

[Transcript:]

Gordon Haff:  Hi everyone. This is Gordon from the floor of OSCON 2014, so if you hear a little bit of a buzz in the background that's all the people here in Portland, Oregon at OSCON.
Today I'm here with my colleague, Dave Neary, who's part of our Open Source and Standards team. His primary responsibility these days is ManageIQ, which is our upstream community for CloudForms, our cloud orchestration and hybrid cloud management software.
As we talked earlier Dave, we're going to be taking things in a different direction for this afternoon. You've been spending a lot of time thinking about networking, which is always the last child, so to speak, to get the shoes when it comes to new types of functionality.
People think about compute, people think about storage, and then people are like, "Well, maybe we ought to do something about that networking thing, whatever that networking thing is, exactly."
You've been spending some time thinking about this, so maybe introduce yourself briefly, and introduce us a little bit to the context of software-defined networking and some of the things you've been working on and thinking about there.
Dave Neary:  Having finished open sourcing ManageIQ, I'm now looking more and more at the network, which is, as you said, the next place where we need to make some progress.
Software defined networking, that's really interesting, the separation of the control plane from the data plane, the ability to define virtual networks inside and independent of the physical network topology.
I've been concentrating a lot on network function virtualization, which is taking these single‑purpose proprietary boxes and turning them into virtual machines running on standard x86 servers.
Some of the concerns there, some of the constraints that we're working on in network function virtualization is that some of the network functions, things like VoIP services with IMS, or voice, data, video, converged 4G services with services like EPC, those are very, very sensitive to both latency and bandwidth.
Oftentimes, you need to be able to chain these things together. A lot of what we're looking at, specifically through Red Hat and the NFV group in OpenStack, is how to make that possible on top of an infrastructure as a service right now.
Gordon:  Let's back up a moment here, because you've used a lot of acronyms, you've used a lot of terms.
Dave:  Oh, yeah. The telco industry...
Gordon:  I think that's only one of the things. I say this only half‑facetiously: part of the reason networking has been slow to permeate compute is that there is a different language.
Dave:  Very much so.
Gordon:  There are historically a very different set of concerns related to the telcos and the like.
Maybe for our listeners who aren't as immersed in this space, I think they probably understand you have compute, software‑defined compute. For our purposes here let's just call it a virtual machine, although obviously things like containers and the like would accompany it as well. Then you have storage. We can think of that as a disk and obviously there's block store and there's object, still fairly simple concepts. Now we need to connect these things to each other and we need to connect these things to the outside world.
Dave:  Right.
Gordon:  What does that software‑defined defining and connecting look like?
Dave:  The primitive in the network is a switch. What characterizes a switch is that you have multiple machines plugged into different ports on the switch.
When you create a connection, the first time that you try to communicate between two machines on that same switch, the switch remembers where the packets ended up going.
Then they bypass all of the other ports on that switch, so you don't end up saturating your network. Now, with virtual machines you've got multiple virtual machines in each physical host.
You need to make connections between virtual machines running on one host and a virtual machine running on a different host that's potentially plugged into the same switch or into a different switch.
You have additional problems. You need to route the traffic between those. You need to create virtual switches inside your hypervisor node, so that basically the traffic from multiple virtual machines is handled separately.
All of this adds complexity. It adds complexity in terms of debugging. It adds complexity in terms of configuration. A software‑defined networking controller, an SDN controller, is a layer which in some ways abstracts away some of that complication.
It abstracts away the network hardware, the virtual switches that you're using, and it allows you to define your network topology at a higher level, and then goes away and it programs the switches and it programs the bridges.
It does what it's supposed to do to make sure that, in a multi‑tenant environment, traffic from one group of virtual machines is not seen by traffic from another group of virtual machines ‑‑ that you're separating that data.
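The MAC-learning behavior Dave describes, where a switch remembers where packets ended up going and stops flooding every port, can be sketched as a short toy simulation. This is illustrative only; real switches implement this in silicon, and virtual switches like Open vSwitch do it in the kernel:

```python
# Toy simulation of a MAC-learning switch (illustrative only).
# The switch remembers which port each source address arrived on,
# floods frames for unknown destinations, and forwards known ones
# out a single port so other ports aren't saturated.

class LearningSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port it was seen on

    def receive(self, in_port, src, dst):
        """Process one frame; return the list of output ports."""
        self.mac_table[src] = in_port  # learn where src lives
        if dst in self.mac_table:
            return [self.mac_table[dst]]  # known: forward, don't flood
        # Unknown destination: flood to every port except the ingress.
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(4)
print(sw.receive(0, "aa", "bb"))  # unknown dst, flood: [1, 2, 3]
print(sw.receive(1, "bb", "aa"))  # dst already learned: [0]
```

The complexity Dave points to comes from running many of these, physical and virtual, with tenant separation layered on top; that bookkeeping is what an SDN controller abstracts away.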
That's just a very simple description of the software‑defined networking layer. A network function is a service which is provided on the network or to the network.
Something like, for example, a firewall would be a network function or an intrusion detection system, or, as I said, VoIP services, data services, and broadband access, for example. These are all network services.
Some of these are at the control layer, which is where we define routing on top of your network, and some of them are at the data layer, dealing with actual individual packets.
Some of them are in the application layer. They're providing higher order functions that are independent of the TCP/IP or even data plane layers of the network stack.
These all have special needs. What you need is for the platform to provide for those special needs and give these functions a way to position themselves in your physical infrastructure, on hosts which have the ability to satisfy those needs and the quality of service.
Gordon: Now, you as a system admin, as a system architect, you're interested in OpenStack. You're interested in essentially the software-defined functions: moving from these special-purpose, single-function, proprietary systems to a software-defined system running on volume hardware and volume software.
What should I be thinking about now? What should I be getting ready for?
Dave:  I think the first thing you want to do is be conscious of the fact that in a data center the network’s important. I think we're aware of that.
Infrastructure as a service oftentimes is added to an existing data center in an ad hoc manner. You set aside a few machines and you run OpenStack on them and you see how it works, and then you maybe allocate more machines.
Before you know it, there's a problem and that problem is probably going to be somewhere in your network. You're going to have a virtual machine that isn't getting an IP address from the DHCP server which is running on an agent somewhere on another host.
Debugging that problem is hard if you haven't thought about your network and planned for it: how do you architect your network? What are you using there? What are the tools that are available to you in the network space?
It's going to be harder to make that transition when you want to roll out infrastructure as a service into production. I think that's at the very highest level.
In terms of specific verticals, specifically around telecoms, where network functions are vital: that's the core competency of the telco industry.
We're seeing several initiatives coming through ETSI, which is the European Telecommunications Standards Institute, I believe, the Linux Foundation, and OpenStack, as I mentioned, where adding features to the platform enables these network functions to be virtualized, which is going to enable the rapid addition of new services in that telco vertical.
It's going to enable the reduction of costs and simplicity of management in those verticals. It's going to be a real game changer, I think, in what's available in terms of telecommunications, VoIP, video streaming, all of these kinds of applications.
We're going to see very, very rapid progress in the coming years because of this move to NFV.
Gordon:  If I can summarize what I've been seeing out there: all this software-defined everything really is an incredibly powerful idea, both from an economics point of view and just from a flexibility point of view.
At the same time, to be realistic, as you say, there are complexities. We haven't even gotten into some of the complexities of the different types of network configurations you might have to set up there especially in the context of existing legacy data centers.
You can't just come in and throw away all your existing switches overnight, because all of your applications and data centers are architected around them. So there's certainly a lot of learning and a lot of complexity associated there.
At the same time, you really can't just halfway software-define things. If you only use software-defined compute, or you only use software-defined storage, or some combination thereof, and just say we'll keep on doing networking the way we always have, you'll still get some benefit, but you won't get nearly the benefit you would if you did the whole thing.
Dave:  I think what it comes down to is, why did we move to software defined compute, software defined storage in the first place? And it's because, what were we doing before?
If you wanted to provision a host, if you wanted to provision a computer, it took hours, days, weeks.
The move to automation, the move to scripting, the move to statelessness, the move to defining the process by which you create an environment allowed you to go faster.
It allowed you to innovate faster because you are no longer in the situation where every time you wanted to deploy a new version, you cross your fingers and you tell the team that they're not going home that weekend because they're going to spend three days in the office making sure the thing works.
When you make your deployment cycle shorter, you increase innovation. You allow yourself to adapt faster to market needs.
If you want to add a switch, or if you want to add a host to the network, that's the bottleneck. That's the place where you actually have to pick up a phone and call somebody, and have that done manually. And all of the scripts are custom scripts, and all of the switches are configured in a very particular way.
Automating that is going to bring the same benefits to the network engineers that it brought to operations engineers over the last five years, and which is now becoming a mainstream operation as you want to reduce your deployment cycles to the minimum possible.
You can concentrate on innovating in your development, and concentrate on high quality operations at high availability, high visibility, situational awareness.
We want the same kind of thing in the network, and that's what this move to software-defined networking promises.
Gordon:  I think you're alluding to some of the organizational changes that this will require in many cases as well, because I've seen similar situations before, for instance when blades first came into data centers.
It was a naive assumption that these separate groups could just come together in a single converged hardware platform (in this case, servers) and it would all work happily together.
Of course, that doesn't just automatically happen, so that's something that CIOs and their organizations need to think about in the context of this software defined networking as well.
Dave:  Sure. One of the four legs of the DevOps movement is culture. It's the most important one, and really, if you're trying to develop, if you're a web application developer and you're moving to the web, you're moving to the cloud, there is a cultural change.
There are a number of things that you have to do differently. It's not just an operations problem, it's also a development problem.
In the same way, moving network operations to software-defined networking is not just the network engineer's problem. It's also going to be part of the development problem, and it's also going to require cultural change throughout the organization.
Gordon:  It's not that it's meaning "Oh, we don't need people who understand networks any longer." Anything but, because you can't just say "Well, the networking is Joe's problem, and I don't need to worry about it." Storage, compute and networking become everybody's problem.
Dave:  Right. And in the same way that DevOps has not gotten rid of operations people. Good operations people are more valuable now than they have ever been.
Software defined networking is not going to get rid of good network engineers. Good network engineers are just going to spend all of their time doing value‑add work instead of undifferentiated heavy lifting.
Gordon:  That's probably a good point to leave off on. I think one of the lessons here is, software defined everything really brings everything together in a sense.
Whether we're talking about hybrid cloud management, whether we're talking about organizations and cultures within IT organizations, it's going to be harder as we move forward to treat things like these little islands that don't need to worry about what everyone else is doing.

Even as we talk about things like micro services and so forth, which aim to isolate things to a certain degree, there's still a certain common culture and a common set of responsibilities that people need to think about.

Thursday, July 31, 2014

Podcast: Patents and open document formats with the OSI's Simon Phipps

Simon talks recent US software patent case decisions and why they're so significant as well as the recent UK government decision about open document formats. Who are the winners and the losers?

Some links:


Listen to MP3 (0:21:47)
Listen to OGG (0:21:47)

Licensed CC-BY-SA 4.0.

[TRANSCRIPT]

Gordon Haff:  Hi, everyone. This is Gordon Haff with Red Hat Cloud Product Strategy, and I'm here with someone that many of you probably know, Simon Phipps, who, among other hats, is the president of OSI, the Open Source Initiative.
We're here at OSCON this week, and I grabbed Simon partly because there's been a variety of recent news which, for those of us that believe in open source and reasonable intellectual property regulation, I think is pretty good news.
Simon Phipps:  I think you're right. Nice to see you again, Gordon. There have been several things in the last month that have been really very exciting indeed. To go in reverse order, the UK announced that it's standardizing on an open document format this week, which means that all future UK government work is going to be using an open standard.
That means that there's now a choice of tools that UK citizens can use to interact with their government. They can use Microsoft Office, but they can now also use LibreOffice. They can now use AbiWord. They can now use a variety of free and open source software, which I think is some great news.
That probably wasn't what you were thinking of, though. You were probably thinking more of the US court of appeals for the federal circuit made a decision about two weeks ago. It was around July 10th I think it was that it came out. This was a finding in the Digitech case.
Digitech are a patent troll who are associated with the IP hoarders Acacia, and they were suing pretty much everyone you've ever heard of that does digital imaging for infringement of a fairly fundamental patent on image profiles. They were suing Mamiya and Pentax. They were suing B&H Audio in New York. They were suing Buy.com. They were suing laptop companies.
The court of appeals had slightly delayed the case waiting for the result of a Supreme Court judgment on another software patent case, which was the Alice v. CLS Bank case.
Gordon:  We'll be talking a little bit more about the document formats, which I think is really interesting. I'd like to dig a little deeper into and maybe to explain to our listeners, how does the federal district court relate to other district courts and relate to the Supreme Court? Why do you think this is a particularly interesting ruling?
Simon:  If you are sued for patent infringement, it will probably be heard by a court of the attacker’s choosing. Commonly these cases end up in a court in the Eastern District of Texas, where the court has a fairly clear preference for finding in favor of patent holders.
But if you then appeal one of those suits, patent cases get heard by the federal circuit. The court of appeals for the federal circuit is the bottleneck or the choke point where all of the appeals over software patent cases end up.
Traditionally as a court, they too have had a tendency to find in favor of patent holders and to uphold pretty much every patent case that's brought before them where there isn't an obvious reason not to.
A change in behavior or a change of precedent that affects the federal circuit court of appeals is very significant. It affects the whole of the US. It means that patent actions that take place across the whole country have now got a new dynamic. A patent aggressor can no longer take it as read that a local victory is also going to mean a federal circuit victory. That's why I think the Digitech case is so significant.
It's also significant in that Digitech was suing a lot of people, all of whom are no longer burdened with expensive and unnecessary litigation.
Gordon:  That's certainly one of the things that happens with a number of these IP cases. They can even cascade down into the individual consumer, user level, which can have a real chilling effect.
Simon:  The worst thing about patent troll actions is that you typically don't know about them. There's been a fair amount of documentation now from researchers about how these cases work. Typically, a patent troll will offer to settle with you without ever going to court. They will set the price of settling at somewhere just a little below the cost of your first court case.
As a consequence of that, many people will decide to pay the danegeld. That's an expression from an old Rudyard Kipling poem; the danegeld was a historic tax that the invaders of Britain and other places levied on their new subjects. "He who pays the Danegeld never gets rid of the Dane," as Kipling put it.
A lot of people settle out of court. They also sign an NDA to say that they won't disclose the fact they settled or the amount they settled for, and so we never find out that these cases have been going on.
The difficulty is that patent law is shaped so that it depends on going to court to correct injustice. There's no way to correct injustice any earlier in the process. The US patent office, because they're overburdened with huge amounts of applications that they have to deal with, they tend to leave errors of judgment, errors of approval, for settling in the courts.
But patent trolls make sure that even if they have patents which are very questionable, they never reach the courts, because people are too afraid to engage in litigation, and they're also too afraid to act collectively because of the NDAs they've signed.
Again, these actions are very significant because if you know that you could get to the federal circuit and win, you may well decide that you're not going to allow yourself to be shaken down by the troll in the first stage.
That will mean fewer of these cases happen, and there will be more opportunity for collective action against the trolls. This will all eat into the trolls' business model, which is to make enough money from the early cases to fund the litigation in the later cases. If you can snuff out those early cases with precedent, then you're on your way to minimizing the problem.
Gordon:  Good news on the patent front.
Simon:  I think it's good news. The other thing that was really significant about the Digitech case is it was the first use of the Alice precedent.
Gordon:  Which was the Supreme Court.
Simon:  The Supreme Court, and Alice v. CLS Bank. Alice Corporation is an Australian company that owns a patent that relates to the minimization of risk in financial trading. CLS Bank decided to implement the algorithm without buying a patent license from Alice, and Alice sued.
CLS countersued. It went through the courts; it went to the federal circuit. The federal circuit found they couldn't easily resolve the case, so it went to the Supreme Court.
The Supreme Court in their judgment created a very clear test to work out whether a software patent is valid. What they said was that there could still be software patents, but that simply taking something that is not patent‑eligible, like an algorithm, and then claiming that it's patentable because it runs on a computer is not sufficient to establish patentability.
They said that to get a software patent, the software that you have has got to improve the computer significantly. Because of that, the standard for getting software patents has been dramatically increased by the Alice decision.
The federal circuit court then referred to the Alice decision, and decided not even to proceed to find out if there had been infringement on the Digitech case because they declared that the image processing software was not a significant improvement to the computer. Rather, it was a computer implementing a non‑patent‑eligible technique.
Gordon:  Simon, I think you do have to give yourself a little credit here. Because as I recall, maybe the last time I did a podcast with you, which might have been OSCON last year, you suggested this might actually be one of the paths towards the rationalization of the patent process without just getting rid of software patents entirely.
Simon:  I deserve no credit whatsoever. The people who deserve the credit are the people actually coming up with the ideas. Mark Lemley is a distinguished academic, he's a law professor, and also a practicing lawyer. He was actually the case lead for the people in the federal circuit who were fighting Digitech. I think he deserves a great deal of credit, as do some folks from EFF.
Having said that, OSI was one of the parties filing an amicus brief in the Alice‑CLS case in the Supreme Court, so we've tried to do our bit on behalf of the open source community to step in there and change the law.
I think the dream of getting rid of software patents completely is still a ways off, but I believe the actions that are being taken now dramatically reduce the risk for innovators in the open source domain.
Gordon:  Let's switch gears to your homeland, the UK, and the ruling around document formats there. First of all, maybe you could explain in just a little more detail exactly what the determination was. Secondly, who does this affect?
Simon:  What's happened over there is in the UK we have a portion of government called the cabinet office. The cabinet office is the administrative hub of the government. They are the office of the cabinet. They facilitate cabinet meetings by the minister of state. They also act as the supervisory body for all of the departments of government. They set policy for all the departments of government about how they administer themselves.
They've been engaged in a review of how IT should be procured. In particular, they've been looking at requiring open standards. They've been looking at requiring open data formats, and they've been looking at reducing deal sizes so that open source companies are able to bid for government business, which are all very positive steps.
They made a determination a while back that they wanted a very critical part of government work in the UK to be conducted using open document formats, so that documents could be manipulated by citizens without the requirement to purchase software from a single supplier.
What happened yesterday was an announcement from the cabinet office. It was an official announcement made by the minister of the cabinet office, so a minister of state.
The announcement was that all future documents published by any government department for collaboration or viewing shall use open document formats. Specifically, documents that are only to be viewed must be in PDF/A or in HTML format. Documents where collaboration is going to take place must be in open document format.
Gordon:  There are often issues with the fidelity of document formats and how convertible they are. As you talk about presentations and the like, is there anything around how convertible a particular ODF implementation needs to be?
Simon:  Honestly, you've got to give Microsoft their due here, and it's Microsoft you're referring to. In Office 2013 and in the current version of 365, they've got really good ODF 1.2 support that, as long as you make intelligent decisions about your documents, is also interoperable.
When I say intelligent decisions about your documents, it's really important that you use free fonts when you're working with documents if you want them to be interoperable. Because no matter how good the document fidelity is, if you've used a font that is only available on a single platform, the way it's rendered on other platforms is not going to be correct. It's really important to use free fonts so that everybody can have them installed on their platforms.
Having said that, the big losers from this are actually Google. Because Google's recalcitrance over ODF means that Google Docs really don't have workable ODF support. That means this decision locks Google out of government procurement in the UK.
Gordon:  Even though a lot of people were jumping to, oh, this affects Microsoft, from your perspective it actually affects Google a lot more than Microsoft?
Simon:  I think it affects Google a lot more. I think Microsoft are actually going to do quite well out of it, because ODF support is in Office 2013. It's not there in Office 2011 by default. People who've got old versions of Office are going to have to upgrade to comply with this. Microsoft is going to see a little burst of upgrade activity as a result of this.
I asked Microsoft for comment, and they sent me quite a negative statement about it, but I think they stand to win from this. It's quite a good save, because I was involved in establishing Open Document Format back at the beginning of the last decade.
If Microsoft had engaged at OASIS in 2002, we would probably never have had any controversy. But it was their arrogance at OASIS in relation to ODF that created the whole crisis. I think they've pretty effectively recovered from that crisis now. I think Office 2013 has got pretty good ODF support.
Now that the UK government requires you to use ODF, there's no inter‑document format conversion going on, and feature disparity is going to be much less of a problem.
Gordon:  How do you see this affecting elsewhere in the EU, elsewhere in Europe?
Simon:  The UK is a very important market for companies that are trading in Europe. The policies that the cabinet office has been working through are highly influential. There are a lot of European governments that are looking at these issues.
The European commission has rather dropped the ball on open standards. In particular, they have been unwilling to have a royalty‑free requirement on open standards. Because of that, vendors have been able to continue engaging in lock‑in even with standards.
Because standards don't protect you from lock‑in. Many standards come with requirements for you to buy licenses or to take some other action in order to use the standard.
For example, if you want to have a mobile phone, all the protocols your mobile phone uses, they may be standards, but they're standards that you have to buy a royalty license in order to implement. Take video formats. If you want to use MPEG, that's all very well, but you've got to actually buy a license from MPEG LA in order to write the software that manipulates those formats.
Now neither of those things are open standards. Both of those are standards that require you to seek permission in order to innovate. Open standards are standards where you don't have to have permission to innovate.
The UK government's determination that open standards are important, and its definition that open standards mean truly open and not just public, are both very influential in Europe. I think we'll see other European governments deciding to pick up the UK's thinking and coursework, so to speak, and implement it themselves. This is quite a significant point for Europe, I think.
Gordon:  In other words, you're not ready to retire, but some good news.
Simon:  Yes, and I think it is good news. It's taken, what, about 15 years for us to come from a point where nobody could possibly imagine anyone other than Microsoft being in the market to one where Microsoft has to scrabble and behave well if they want to stay in it.
That was all brought about by open source software. If Open Office and then LibreOffice had not been doing what they did, we would have seen Microsoft still having a monopoly on the desktop.
Gordon:  I think in some ways that's the greatest effect that those have had.
Simon:  Yes. It's actually very satisfying to look at. The other thing that causes quite a lot of people cognitive dissonance here at OSCON is going out on the show floor and seeing an enormous Microsoft open source stand, which, again, none of us would ever have expected to see 15 years ago when we were getting started with ODF and Open Office.
Gordon:  I think at some level Microsoft has learned that even if they're not an open source company to their core, by any means, they do at least play in the game, and they need to play in the ecosystem.
Simon:  I was explaining to somebody from Microsoft yesterday that this isn't the end of their journey, however. Because they're still making significant revenues by shaking down companies that are using open source and claiming there are patent infringements in software that they've never been involved in, never contributed to, and can't prove they have a patent on.
As long as they continue at what I call being big trolls, our respect for them is going to be at best diluted.
They've still got to finish that journey. They've got to recognize that community members don't attack community members with patents. I think when they do that, they will then have been able to join the open source community as a full peer.
Gordon:  Maybe they need that as a sticker for the next OSCON.
Simon:  Maybe. There's still plenty to do in open source. That's why I'm still carrying on with OSI. I've got another two years before I'm term‑limited off OSI, but we're significantly transforming OSI. As I wrote in InfoWorld this morning, this is the golden age of open source. Now more than ever, we need to educate people about what that really means.
People assume that everything is going to be open. They don't necessarily take the steps that are required to actually make things open.
Gordon:  Talking to IT people at large companies, you still hear statements about open source that make you wonder, did you just crawl out from under a rock for the last 10 years? I think it still surprises those of us in the open source ecosystem and community how limited the understanding is in some circles about security, safety, risk, and so forth.
Simon:  There continues to be a market for education on, for example, why security through obscurity is bad and why open source, while not guaranteeing your security, makes it easier for you to ensure your security.
There still needs to be some work done on how open source is not about money. The early use of the word free to describe open source software means a lot of people are fixated with money. They want to use a money frame all the time about open source.
Open source is about flexibility. Open source is about being able to innovate without permission. It's about getting out of the way and letting people get on. That's why we have open source licenses. You hear people saying open source licensing is irrelevant, we don't need to worry about open source licensing.
That's complete rubbish. You need to make sure that your code is under an open source license, not to satisfy some lawyer somewhere, but in order to empower other people to collaborate with you without having to get your permission first. When you get these things right, that's good.
We still need to keep on doing this education. It's to a certain degree surprising that 15 years after the start of the open source movement, we're still having to explain that it's not about free stuff, that licenses matter, that a level playing field is key, and that contributing is in your own best interests. OSI is continuing to carry those messages.
Gordon:  We all still have lots of work to do.
Simon:  Yes, still plenty to get on with.
Gordon:  Great. Thanks for your time. There's lots more things I'd like to talk about, but in the interests of our listeners' attention span, I think maybe we'll break now and look forward to next time. Hopefully, there will be some more good news.
Simon:  Absolutely. Thanks very much.

Gordon:  Thanks, Simon.