Thursday, March 28, 2013

Why PaaS is such a useful abstraction


In February, I wrote about how Platform-as-a-Service (PaaS) has become an approach that appeals not just to developers, its original audience, but also to admins and architects. It allows IT operations (admins) to define policies and standardized environments for developers and then turn things over to developer self-service and the "machinery's" automation to do the rest. It helps architects standardize developer workflows and accelerate application development in their organization.

In this post, I dig a bit deeper into why PaaS is able to appeal to all these different roles. (Giving credit where credit is due, I thank my colleague Gunnar Hellekson, who is the Chief Technology Strategist for Red Hat's US Public Sector group. This post was inspired by a back-and-forth over email we had following that earlier post.)

The fundamental reason, I contend, is that the PaaS abstraction sits at a very organizationally useful place for many types of application development. More so than Infrastructure-as-a-Service in many cases. (IaaS does offer more flexibility in certain respects. However, in the case of Red Hat's OpenShift PaaS, multi-language and multi-framework support—in addition to a plug-in cartridge architecture—gives a lot of flexibility within a PaaS model too.)


To see what I mean, consider the difference between IaaS and PaaS from the perspective of what the consumer of the service cares about and doesn't care about. (Read "doesn't care about" as "doesn't have any visibility into.")

Unlike IaaS, in the case of PaaS, the operating system and application platform, for the most part, are abstracted away from the user (Java or web developer). I say "for the most part" because aspects of the application platform, such as language version or framework, certainly matter to the developer but underlying aspects of the platform that aren't exposed in the form of APIs, programmatic interfaces, or language features don't. 

So what?

Here's how Gunnar put it to me.

One of the greatest benefits of a PaaS is its ability to create a bright line between what's "operations" and what's "development". In other words, what's "yours" and what's "theirs".

Things get complicated and expensive when that line blurs: developers demand tweaks to kernel settings, particular hardware, etc. which fly in the face of any standardization or automation effort. Operations, on the other hand, creates inflexible rules for development platforms that prevent developers from doing their jobs. PaaS decouples these two, and permits each group to do what they're good at.

If you've outsourced your operations or development, this problem gets worse because any idiosyncrasies on the ops or the development side create friction when sourcing work to alternate vendors.

By using a PaaS, you make it perfectly clear who's responsible for what: above the PaaS line, developers can do whatever they like in the context of the PaaS platform, and it will automatically comply with operations standards. Below the line, operations can implement whatever they like, choose whatever vendors they like, as long as they're delivering a functional PaaS environment.

We spend a lot of time talking about why PaaS is great for developers. I think it's even better for procurements, architecture, and budget.

In other words, PaaS is a nicely-placed layer from an organizational perspective because it sits right at the historical division between operations roles (including those who procure platforms) and application development roles—thereby allowing both to operate relatively autonomously. We also talk about PaaS as enabling devops, which is to say the melding of developer and operations roles. But what we really mean when we talk about devops in this context is that operations can set up the platform and environment for developers and then the PaaS itself takes care of many ongoing ops tasks such as scaling applications.

Is this division of roles set in stone? Of course not. And, in fact, some companies that grew up with the cloud have blurred that division to various degrees. Adrian Cockcroft of Netflix has written and spoken about the topic frequently and I recommend his blog highly. (Adrian has referred to PaaS as "NoOps," partly for the reason cited above—i.e. that it's not so much that developers do ops in a PaaS but that the "cloud" does ops. All of which is also a useful reminder not to get too wrapped up in xOps labels.)

However, in the more general enterprise case, there's a lot of inertia to organizational structures and skill sets. As the saying goes, "technology is easy; people are hard." As a result, useful abstractions that map reasonably to existing roles and responsibilities are valuable. And that's what PaaS does. 

Links for 03-28-2013

Monday, March 25, 2013

Links for 03-25-2013

Podcast: Red Hat OpenShift Enterprise with Joe Fernandes


PaaS started out as a tool for developers. But on-premise commercial products like OpenShift Enterprise now make PaaS a valuable tool for many different roles within enterprises. Joe Fernandes heads OpenShift product management at Red Hat. On this podcast, he shares what enterprise customers have been telling him about PaaS.

References:


Listen to MP3 (0:17:51)
Listen to OGG (0:17:51)

Transcript:


Gordon Haff:  This is Gordon Haff, cloud evangelist with Red Hat. I'm sitting here today with Joe Fernandes, who handles the OpenShift product management team. Welcome, Joe.
Joe Fernandes:  Thanks, Gordon.
Gordon:  We've been on a little bit of an OpenShift run in terms of podcasts recently because there's been a lot of activity going on. Today I'd like to focus in specifically on OpenShift Enterprise, which is our on-premise commercial offering of Platform-as-a-Service, for organizations that want PaaS but want it on-premise. Joe, we launched OpenShift Enterprise back in November right around Thanksgiving. Can you maybe tell us where things stand?
Joe:  Yeah. Thanks, Gordon. Things have been going great. As you mentioned, we launched the first GA release of OpenShift Enterprise 1.0 back in November. It's been a whirlwind since then. We've been involved in a number of customer engagements, different evaluations, and proof of concepts. Talking to our customer base and reaching out to new customers who have learned about what we're doing in the cloud space and in PaaS specifically. More recently we launched the 1.1 version, which was our first point release. We're in the process of launching another release, 1.2, which will be timed around Red Hat Summit. We're being quite aggressive on the development side in terms of rolling out new releases and integrating the features and enhancements and extending the platform to suit the needs of our users.
Gordon:  What are some of the things that maybe have surprised you a little bit as we rolled out OpenShift Enterprise? This is a new area in Platform-as-a-Service, which started out predominantly, like our own offering, as these hosted services. What are you finding now that you're talking to a lot of enterprises?
Joe:  One thing is the broad interest in Platform-as-a-Service among enterprise customers. We're talking to some of the largest companies out there and folks that may have reasons to be a little bit more reluctant to move applications to the cloud, to use some of these newer cloud services. The interest among enterprise customers is great to see. Then within those customers, the interest within the different teams. I think we talked about this last time. PaaS is often pitched to developers, but we're talking to just as many administrators, operations folks, and enterprise architects about what PaaS can do for their organizations as we are strictly talking to developers and development managers.
Also, the use cases. We're talking to a lot of customers about PaaS not only for dev and test but for production. That's really great to see because people are thinking about this as something that they can use to take applications across the life cycle.
Gordon:  One of our colleagues, Dan Juengst, has talked about this idea of a software factory. [What] that really involves, though, is you can't just have a factory for your applications in part of their life cycle. It really needs to be a workflow that takes you from the applications being written to the applications running in production.
Joe:  Right, absolutely. There are a few things driving that. First, obviously, the IT operators, the administrators. One of the things they need to do is serve the needs of their developers that are responsible for building these new applications as services. Oftentimes, enabling those developers with new platforms, software, and so forth, so that they can stand up these applications, do their development, and do their testing, can be a challenge, especially given the proliferation of new applications and new services being developed across the enterprise.
We've also gotten into the challenges that they face once those applications move to production. They not only want to reduce the amount of time that it takes to get their developers and their QA teams enabled, but for themselves, they're deploying applications to production at a rapid pace. They want to be able to accelerate and streamline the process of moving those applications out to production, and they're looking at Platform-as-a-Service as a mechanism for maybe helping them do that.
Gordon:  One of the things I find interesting is how enterprises are adopting PaaS. When people started talking about the cloud at first, there was one school of thought that everything was going to go Software‑as‑a‑Service. People weren't going to need to write applications any longer and so forth. That certainly has happened to a degree with certain types of applications ‑‑ a Salesforce CRM, for example. One of the things that has come out in the last year or so is that these tools to make it easier to develop applications have actually created this real renaissance of writing applications to support specific industries, specific business requirements and so forth.
Eric Knipp of Gartner actually was speculating that maybe we're going to have a golden age of enterprise application development brought on by PaaS, because there really is this incredible appetite for applications that support a particular company's business. In the past, that appetite was held back by the fact that it took a lot of effort to write those applications.
Joe:  Yeah, absolutely. A couple of points you made there. First, there is a tremendous appetite within the enterprise for tools and methodologies that can help accelerate application development. The availability of packaged applications through Software‑as‑a‑Service hasn't diminished the need for custom application development. Oftentimes, a lot of these applications are even developed around services.
We're talking to customers that may be using a Salesforce, but then build a whole series of modules that plug into Salesforce or applications that pull data from Salesforce or whatever they happen to be using for CRM, ERP or what have you.
These are what we refer to as systems of engagement, systems that basically take that data from ERP, CRM, or other applications which are traditionally systems of record, and bring that closer to users ‑‑ whether it's customers, whether it's partners, or whether it's internal employees and so forth.
Again, these may be Web applications, but they may be mobile applications. They may be social applications. They may be location‑aware. Again, as the different types of applications have grown, the enterprise needs to keep up and needs to be able to build these things faster.
What you see is a long tail of applications within the enterprise. You see a handful of big, strategic applications that probably still run on hefty systems. Maybe they're not even running in virtual environments today. Then you see a much longer tail of these systems of engagement, these smaller applications that address specific use cases or target specific areas of the business.
The question is, how does IT support all that? How do they manage a handful of mission‑critical systems, but still enable the business to do what it needs to do across all these applications?
The second point you made is, there is always an adoption curve within the enterprise. Particularly, as you get into larger organizations, there's certainly more risk. There's more concern about making sure that they're not exposing themselves to risk as they adopt new technologies.
We saw this with virtualization, which in the early days was something that enterprises were still getting their hands around. Now you'd be hard‑pressed to find an enterprise that doesn't have a significant portion of their infrastructure virtualized.
Software-as-a-Service. Most enterprises that you talk to are leveraging one or more significant SaaS‑based applications to run their business. Even with Infrastructure-as-a-Service, I've been struck by the number of customers that I've spoken to that already have Infrastructure‑as‑a‑Service‑based clouds either in operation or in development, or at least something that they're thinking about.
Platform-as-a-Service is the next wave, and it really hasn't quite made it into the enterprise at a large scale yet, primarily because it hasn't been something that enterprises have been able to consume in either an on‑premise or a hybrid model.
That's the first thing we did with OpenShift Enterprise. We took Platform-as-a-Service and the capabilities that you previously only saw from public PaaS providers, and we brought them to enterprise customers in an on‑premise, private PaaS deployment model that also enables hybrid. We're continuing to see great traction with that.
Gordon:  Let's talk about some of those specifics here, going from dev and test to production. What are the requirements these customers have, and what changes with something like OpenShift Enterprise that makes it now suitable for what they want to do, where previous offerings really didn't meet their requirements, or at least their perceived requirements?
Joe:  When you're talking about running production applications within a PaaS, it's really not much different than what you think about for production requirements in general. People are going to be focused on scalability. They're going to be focused on security. They're going to be focused on compliance issues, which may be specific to their particular business or region of the world that they operate in and so forth. When they talk to us about our PaaS platform, they want to know about these things. They want to know, again, on the one hand, how's it going to meet the needs of my developers, my QA folks or the dev and test environments that we hope to bring this to. But then, if I bring it into production, what's your story around high availability? What's your story around security? How are my administrators going to manage this stuff?
In a production setting, it probably is no longer developers that are deploying those applications. It may be an application administrator or an operations person that's going to be handling those deployments. They need to handle those oftentimes through some kind of a workflow or in a managed way and so forth.
These are things that, when you think about production environments, really start to change the game and become something that I think PaaS is evolving to in terms of being able to address those needs.
Gordon:  Of course, it has to support development environments like Java EE. They want to run their enterprise apps. I think we want to be very clear on one thing here. This isn't about on‑premise versus hosted offerings. I certainly expect, in many cases, you're going to see enterprises adopting some sort of hybrid architecture. In fact, many of them are effectively doing that today. The issue in many cases today is that the stuff that isn't on‑premise is shadow IT that isn't necessarily being managed in a systematic way.
Joe:  Definitely, for many organizations, particularly larger enterprise organizations, it's going to have to be a hybrid model. They're not going to be able to entirely run their business in the public cloud like startups or many small businesses may do today. They need to be able to evolve at their own pace. I think Red Hat is uniquely positioned to help them do that with not only our PaaS offerings but our hybrid cloud approach in general. That combines private on‑premise cloud deployment with what folks can run on the public cloud providers like Amazon and others. I think we have a great story there.
Gordon:  We should also mention to our listeners. This is all open‑source. There's a community version of this, OpenShift Origin. Of course, we do still have our hosted offering, which is probably the easiest way if somebody just wants to get a sense of what this OpenShift thing is about. Just go to OpenShift.com, sign up for a free account, and give it a try.
Joe:  In fact, you've hit on a number of things there that really make us excited about what's going on here on the OpenShift side and in the Red Hat cloud business in general. It starts with the fact that Red Hat is built on an open‑source model and has proven that they can make that successful in the enterprise. Like all of our other products, OpenShift is based on an upstream open‑source community, which is OpenShift Origin. All of the source code for our PaaS platform is freely available on GitHub, Apache 2.0 license. We're building a community of contributors that are helping us build the features and the functionality that we need to drive this platform forward. Then we take that in OpenShift Enterprise and we harden it and commercialize that for enterprise distribution.
Again, this is something that enterprises need because what we're going to do in OpenShift Enterprise is, we're going to handle not only support but things like managing security updates, patches, compatibility from one release to the next, binary compatibility and stability that you don't always get in open‑source upstream communities because there the focus is on innovation.
Again, just like we combined the innovation of Fedora with the enterprise nature of Red Hat Enterprise Linux, we do the same thing across all our products. It's no different in PaaS. The one difference in OpenShift is we have a third leg of that stool, which is the OpenShift Online service, which is also based on this OpenShift Origin open‑source platform. That's really important.
It also all runs on Red Hat Enterprise Linux, which is important for a couple of reasons. It's a hardened platform that businesses trust to run their most critical applications, but it's also something that administrators are familiar with. We're trying to take those administrators and help them evolve their RHEL infrastructure, their Linux infrastructure into a cloud infrastructure with OpenShift. Again, that's something we're very excited about.
Gordon:  One of the interesting things about having had the OpenShift Online service here is that it has given our engineers a lot of good experience at what's needed to scale up a Platform-as-a-Service here and put the security policies in place, the cgroups policies, to get multi‑tenant performance so that you don't have resource hogs. It's given us this platform to test scalability that I'm not sure we've ever quite had before.
Joe:  Yeah, it is one of the challenges in enterprise software development. You never quite get to run the stuff at the same scale that your customers do. I've been a product manager for almost 15 years, and I've seen this first‑hand. You can test it in the lab, you can test it with your QA organization, but where it's really going to be tested at scale is when it goes out to your customer base. This was a unique thing about OpenShift. Before we shipped the 1.0 GA version, we ran our OpenShift Online service at scale for over 18 months. At this point, we've seen quite a bit there. When you launch an online service and you open it up to the world with just an email address to sign up, you get some very interesting applications. Sometimes you get some very malicious applications. You get all sorts of requests from across the spectrum and so forth.
Although our sales team would have liked us to release products sooner, I think it was a necessary step to get us to where we are today. What we learned about security, what we learned about multi‑tenancy...
Again, we run the online service on Amazon, so we're paying the bill for all those apps that we're running there free of charge. Multi‑tenancy density is very important because we want to manage our own spend even as we're providing this service to end customers. We learned a lot about that and then learned a lot about the different features, as you mentioned earlier.
Developers don't just want a single‑language PaaS; they want different languages. We started out with Java, Ruby, and PHP, added things like Node.js, Python, and so forth, and continue to explore adding different capabilities.
I think you'll see that when you come to OpenShift Enterprise, a lot of that experience has made its way into the product. As we look forward, we're going to be launching the commercial version of our online service later this year. I think you'll see it reflected there as well.
Gordon:  Great. Thank you, Joe, and thank you all for listening. As I say, we've got a lot of good OpenShift material out there. Check out OpenShift.com. Sign up for an account. There are also all kinds of how‑to guides and videos and other types of blog posts and what have you there. It's a product that's easy to get a really good idea of what it's about without having to do a big install on‑site. Again, thank you, Joe. Thank you, everyone.

Podcast: Working with OpenShift with Mark Lamourine

OpenShift is Red Hat's Platform-as-a-Service. Mark Lamourine shares his experiences working with the OpenShift Origin code from the perspective of someone outside the main engineering organization. Mark also discusses what he's currently working on around OpenShift and how interested people can get involved.

Resources:

Mark's posts on Google+
OpenShift Origin and related links (IRC channels, etc.)

Listen to MP3 (0:14:40)
Listen to OGG (0:14:40)

Transcript:


Gordon Haff:  Hello everyone, this is Gordon Haff, Cloud Evangelist with Red Hat. Today, I'm sitting with Mark Lamourine, who's a software engineer with Red Hat. We're going to talk about the OpenShift Community from a somewhat different angle, today.
A few weeks ago, I interviewed Matt Hicks, who heads the engineering team. He talked a little bit about what OpenShift is doing to make it easier for people to contribute code.
Mark, although he works for Red Hat, is really coming at this a little bit from an outside perspective. Among other things, Mark will talk a little bit about what OpenShift is doing from a semi‑outside perspective around community. Welcome, Mark.
Mark Lamourine:  Thanks Gordon.
Gordon:  Can you tell us a little bit about yourself?
Mark:  I've been at Red Hat for three years. My background is in system administration and software development. I worked for a while at Genuity, which is a now-defunct ISP. Most of the focus in my career has been on system administration. When I came to Red Hat, I got to work on writing documents that help system administrators implement the things that they're working on.
Gordon:  What are you doing that's maybe a bit different from what the people on Matt's engineering team are doing?
Mark:  What I do, is I work from the outside. I am not in the chain of command of Matt's regular engineers. I come at things from the standpoint of the system administrator who needs to implement this on the community box. I'm coming at things from outside. I build the boxes without the benefit of all the internal tools that the people here have, so that I can understand how someone who is coming to OpenShift, as an implementer, will see things.
Sometimes the people who are doing the engineering are so focused on getting the features out that they lose track of what it feels like to be a person who's not totally ingrained in the culture.
We're trying to foster a community. We're trying to invite people in. I'm trying to identify places where there would be blocks, difficulties, or confusion for someone trying to come in and to figure out what things they would need to know so that they can engage well with the regular engineering community.
Gordon:  I think by way of context here, OpenShift started with our online service, and that very much had an external focus on the developer. Now that we have our open source OpenShift Origin and our OpenShift Enterprise commercial subscription offering, there's still very much a focus on the developer, but people also have to actually stand this up on their own sites, so system admins and architects have needs that have to be addressed as well.
Mark:  Yeah, that's a change that's happened in the last few months. You're right. When we first put OpenShift online, the focus was on the application developer; that's really the focus of OpenShift in general, to allow an application developer to not have to be a system administrator. Now, the service works, but think about someone in a company who wants to have an internal launch. They've got 400 PHP developers who are all working on their laptops, in their own little environments, and they want to create a standardized environment that they all use and where they don't have to be a system administrator on their own box.
Someone has to make that happen. We have packaging. We have a whole team who are addressing commercial opportunities that Red Hat has for companies that want to do this and pay us for it.
But we also have the community, where we might have a university or a small shop who wants to be able to set one of these up. I'm trying to address their needs. I'm trying to act as one of them and bring their concerns back to the engineering community inside, so that they get a perspective on what it's like to be that person.
Gordon:  This really has been one of the ways that OpenShift has been evolving since we spun it up about 18 months ago: there was a lot of interest in the online version. We still have, I don't know how many applications, a lot...
But what's evolved, both with Red Hat and with the industry in general, is that a lot of companies we talk to, even relatively small organizations, are saying, "We kicked the tires, we ran this online thing, and it's really great, but for something as important as our application development, this is really something that we want to have full visibility and control over."
Mark:  There are a lot of reasons why a company would want to have control over… There are the obvious things, like we've got business information that we don't feel comfortable having out in the cloud. Right now, our online stuff only runs on Amazon. But there are other reasons as well. It's not only the information; there's also the reliability of your service. We are working on having a commercial offering with SLAs, but that's not there yet.
You might want to work in a situation where you're not competing with all of the other users who are on one of these boxes. That you can set up your own and do it. Again, we have an engineering team who's doing that for fairly large commercial interests.
I can see wanting to do this inside a small company or even with only a few people, so that they can collaborate well together. It suits itself well to that kind of shared collaboration, in conjunction with a service like GitHub or your own internal revision control services.
Gordon:  Let's talk about some of the specific perspectives that you've had working with the OpenShift community. Maybe talk about some of the specific things that you've done, for starters.
Mark:  With respect to the product itself, I've done a fair amount of work on the DNS backend services. DNS is an unusual backend service. With most services, you can set up your own little instance and just run OpenShift or whatever; you can have this nice little self‑contained demo. DNS is, by its nature, not like that. DNS is publishing. DNS is pushing something out where people can see it.
In most IT organizations, that service is pretty carefully guarded. You have to talk to other people to connect to their service, so that you can publish applications.
Another aspect of DNS is that it's probably the single most reliable and most used service on the Internet, so most people don't get a chance to kick it. They set it up, it just runs, and they don't have to worry about it. It's not until they start trying to publish something that they realize that there's more going on there.
I was surprised to find I have a unique perspective, because as I said, I worked at Genuity. I worked on the DNS servers that published a large part of the Internet during that period of time. I happen to have some background and it's been helpful and fortuitous, but I didn't do anything special for it. It wasn't what I was hired for. It was just something where I saw a need and stepped in.
That seems to be what my role is. If I see something that has a need and I have the background, I step in and offer something.
Gordon:  If somebody is a systems admin, let's say, and they want to start playing with OpenShift in the community, what would your recommendation be?
Mark:  The first thing would be to look at the actual community offerings. The biggest activity is on the Freenode IRC channels, #openshift or, more specifically, #openshift-dev. The #openshift channel is more for application developers. The #openshift-dev channel is for people who are doing implementations, debugging, or working on internals. The second thing would be to look at the mailing lists, which are available on the OpenShift community website; there's specifically a dev mailing list there. IRC is the big place to start. We've also started working, Krishna and I, on providing content on a Google+ community. All of those are available. They should be easy to find with a Web search.
Gordon:  What are specific types of things that somebody could do if they wanted to maybe start making some contribution?
Mark:  Get a GitHub account, find the origin server repository, and start looking at the code. That's really the way I approach it as well. I have been on the project since near the beginning, but there's an awful lot I don't know. The engineering population has grown tremendously since the beginning. I don't know all the people anymore and haven't for a while.
The fastest way is to go look at the code. The GitHub site has fairly good internal documentation; you can walk down the tree and look at individual pieces.
Again, get on the channel and ask. I'm one of the people. There are a number of us who are there and watch it all the time.
Even when I don't know the answer, if I see a question go up and no one responds right away, I will at least say hello. I might say, "I'm not the best person to answer this, but post it out here. Talk to me and someone who is will eventually come by, scroll back to the logs, see your question, and be able to answer it."
Gordon:  I've been seeing some traffic in the various lists that we're also doing some things to make it easier just to get Origin installed.
Mark:  There's a lot of work there because it's not the easiest thing right now. It's not perfect in Fedora 18. We got the original packages into Fedora 18 with the release, because if we hadn't, we'd have had to wait another six months for Fedora 19. But there's still a lot of work to do if you want to build a box on Fedora 18. There are packages that we need that aren't yet upstream, so there's a repository that we maintain for OpenShift with those packages. We're still working on getting them upstream, but we can't stop development, so we provide them in the interim.
It's been moving; how to install this stuff is a moving target. There are a number of blog posts, and really, when you find one, you want to look at the age, because the older they are, the more likely they are to be outdated. Even the newer ones are going to have bugs in them. We're still working on it, and we're still getting it so that it's seamless.
That's really the part I'm working on. There are people who are working on how to wrap it all up; I'm actually working on how to take it apart, so that when something doesn't go the way you expect, you know where to look and what to do. That's really what the focus of my various blog posts has been.
Gordon:  There's also a new cartridge architecture (cartridges being essentially the plugin mechanism for OpenShift) that is planned to really make it easier to develop cartridges.
Mark:  It will. It's not an area that I'm really deep into right now. There are some really good people working on that, so I haven't really felt the need to get into it directly. But the big thing they're doing is providing cleaner interfaces for the user space code. One of the aspects of OpenShift that's a bit unusual in a PaaS is that it's not a virtual machine. It's a multitenant system, and one of the things I worked on early on was figuring out ways to isolate individual users so that they can't denial-of-service each other.
You can't DOS someone else who's on the box. You can't escape. You can run your application there and not worry about whether or not somebody else is eating up all the memory.
What the new cartridge architecture will do is nicely define the boundaries between what are user space tasks, what are system space tasks, and what are OpenShift setup tasks, which really fall in the middle.
Gordon:  What are you working on right now?
Mark:  A couple of different things. I'm actively working on the experience of building an OpenShift broker from scratch without using the package scripts, so that I can see what's going on underneath and talk about it. I'm also looking at the...I've got a pull request outstanding for putting the package build logic in a place where people expect to see it.
Right now, we have a set of OpenShift dev tools. They wrap the build process. It works really well, if you're working in a specific environment that we use for testing and for automation.
It hides a lot of the details of what's going on inside and it doesn't yet allow you to tweak specific areas, build a single package, or run through the build install test cycle, for a single package you might be developing.
What I'm doing is working to move those tasks closer to where the developer is by putting, in this case, a Rakefile, which is a Ruby equivalent of make, into each package directory. So that when someone is developing, they look and go, "There's a Rakefile there." I can type rake tasks and it will tell me what I can do. Then I can go to the top and I can say rake build.
It will walk down the tree and I don't need a set of special tools to do that. I'm working to make it so that that build process is layered, so that you can get at it at each of the layers as is appropriate for your work.
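(For concreteness, a per-package Rakefile along the lines Mark describes might look something like the sketch below. The task names, spec file name, and shell commands here are illustrative assumptions on my part, not the actual OpenShift build logic; with a file like this in each package directory, the standard rake -T lists the available tasks and rake build works from that directory without any special tooling.)

# Illustrative per-package Rakefile sketch (assumed task names and paths).

desc "Build the package RPM from its spec file (hypothetical spec name)"
task :build do
  sh "rpmbuild -ba mypackage.spec"
end

desc "Install the freshly built RPM (illustrative output path)"
task :install => :build do
  sh "sudo yum localinstall -y rpmbuild/RPMS/noarch/*.rpm"
end

desc "Run this package's test suite"
task :test => :install do
  sh "rspec spec"
end

task :default => :test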
Gordon:  Sounds like fun stuff. I should also mention to our listeners that we have a community day coming up for OpenShift in mid-April, right in advance of the OpenStack Summit in Portland, Oregon. If you're going to be in the area, stop on by.
Mark:  We've also started a Friday IRC hour at 9:00 Pacific time. Krishna and I, and a couple of other people, will be available specifically to answer open questions, Fridays at noon Eastern, 9:00 Pacific.

Thursday, March 21, 2013

Why apps will continue to make sense

Michael Elgan writes in Datamation:

To Google, ChromeOS devices, such as the new Chromebook Pixel, are Googly Smart Internet devices that support and enable the Google vision for the world. Yes, you can use them for Dumb Internet and Offline stuff, but they're optimized for being online and using Google's algorithm-based services.

Android, on the other hand, is a hybrid device of all three. Yes, it "delivers" Google's Smart Internet products, other app makers' Smart and Dumb Internet products, and also runs apps and data stored exclusively on the device itself, which are designed to function the same way whether there's an Internet connection present or not.

Like all companies, Google has limited resources -- especially limited engineering talent. The company wants to put "more wood behind fewer arrows," as Google famously once said in a blog post.

Instead of wasting time on the Dumb Internet and the Non-Internet, Google intends to put as much "wood" as possible behind the Smart Internet.

The Smart Internet is where Google can dominate everybody and really make a difference.

I don't have any argument with a contention that this is the world as Google would like it to be.


What I'm skeptical about is whether this will be the world as it exists in reality over interesting time horizons. Smart Internet vs. Dumb Internet and Non-Internet is effectively a rephrasing of HTML5 vs. apps. I do expect the needle to shift more towards HTML5 and network intelligence as connectivity (incrementally) improves. And the trend towards applications that depend on big data and big compute means that certain applications are inextricably children of the network and can't exist in a meaningful disconnected way.

That said, we've also gained a lot of experience suggesting that local state is a really handy thing in mobile. You don't hear too many speakers using terms like always-connected at conferences these days. (A wise move given that the conference attendees are likely struggling with the conference WiFi.) In truly mobile settings—outside of company and university campuses or homes—we've seen lots of advantages coming from easily managed local state. Which is to say apps.

Community has been a critical OpenStack ingredient

Nice rundown on the private cloud market by Derrick Harris over at GigaOm. (Specifically, IaaS infrastructure—although categorization can still be a little tricky at the margins. For example, there are a variety of complementary IaaS management products that are beyond the scope of this discussion.) The money quote:

OpenStack is what happened to the private cloud market and forced so many acquisitions, pivots and even one closure. Users, investors and everyone, really, were waiting for some promise of cloud interoperability and portability (aka something other than Amazon, VMware or Microsoft) and OpenStack delivered it. Further, for the service provider community — which has arguably bolstered the sales of private cloud software since its inception — OpenStack provided a relatively engineering-free path to public cloud offerings (compared with building their own from scratch, that is) without fear of being at the mercy of a startup that might fold tomorrow and take its core technology with it.

It's striking how quickly this market has evolved. Some of the companies discussed in the article—the names were all drawn from a June 2010 GigaOm report—never attained much of a profile. But, from that 2010 vantage point, others that looked as if they were in a good position to be important players appear to have faded. 


The OpenStack story is an ongoing one, especially with respect to commercially-supported products based on the OpenStack project. But we can say that OpenStack already offers a great example of how open source combined with a robust community around that open source has great power. 

I mention the community (including the companies involved through the OpenStack Foundation—including my employer, Red Hat) as well as the code because it's the two together that have helped OpenStack gain such ground. OpenStack had a strong start in July 2010 based on contributions from Rackspace and NASA. But I'd argue that it was the creation of the OpenStack Foundation and the putting in place of an appropriate governance structure that really allowed the project to become the focal point for broad industry collaboration. 

Wednesday, March 20, 2013

Links for 03-20-2013