Saturday, January 12, 2019

Keeping my herbs alive: An indoor watering system

As I was again reminded in a recent Twitter thread, I like having fresh herbs. But you often don’t need a lot of them, which in turn means that it’s nice to have some pots of them growing at home so you can snip off just a little bit rather than buying 20 times what you need at the store.

The problem is that 1.) I travel and 2.) I forget to water plants. One common low-tech method for automatic watering is to stick a filled wine bottle (or soda bottle, etc.) upside down in the soil. This works reasonably well for a few days, but my issue was the 2-week plant-killing excursions. So I looked around for commercially-available solutions.

There were a few systems online, but at least one of them had to sit up above the plants, which wasn’t really feasible for my setup. And none of them seemed to have very good reviews. So I decided to see what I could come up with myself. I already had a small hydroponics system in the same room, which turned my thoughts to supplying the water from an aquarium pump, sitting in a bucket of water on the ground, hooked up to a timer. This is indeed what I ended up doing, but getting to a system that actually worked took a fair bit of fiddling and experimentation.

My first thought was to “borrow” the aquarium pump I already had to keep my hydroponics system topped off in the summer and use some T-connectors to fan out the tubing so that I could water multiple plants. To make a long story short, the pump I had wasn’t powerful enough to lift water and force it through a network of T-connectors. Elevating the bucket on a stool helped. Sort of. But if I lifted it too high, the water started siphoning when the pump turned off. Furthermore, as I discovered when I experimented with different tubing sizes, in a network of tubing like this, it’s hard to get relatively even flows out of the different lines.

What ended up working the best had three basic elements:

  • A more powerful aquarium pump (head of about 8 feet)
  • A manifold intended to split the output of an aquarium air pump
  • A digital timer with 1-second resolution

The manifold was really the key thing here because it splits the flow pretty evenly. Furthermore, each of the outputs has a little flow-control valve that lets you tweak the flows to account for different line lengths or different amounts of water for different pots. You just have to experiment to see how long to run the pump. For me, it’s about 15 seconds, which is probably a little on the short side; I sometimes end up hand-watering a little every week or two.
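If you want a starting point before experimenting, the back-of-the-envelope arithmetic is simple. Here’s a quick sketch; all the numbers are hypothetical placeholders, not measurements from my setup:

```python
# Rough sizing sketch for a timed watering run.
# All values below are assumptions to plug your own numbers into.

GALLONS_PER_HOUR = 60   # assumed effective pump output at the working height
OUTLETS = 6             # assumed number of manifold outlets in use
RUN_SECONDS = 15        # timer on-time per watering cycle

OUNCES_PER_GALLON = 128

# Total water moved in one run, then split (roughly evenly) by the manifold.
total_oz_per_run = GALLONS_PER_HOUR * OUNCES_PER_GALLON * RUN_SECONDS / 3600
oz_per_pot = total_oz_per_run / OUTLETS

print(f"~{total_oz_per_run:.1f} oz total per run, ~{oz_per_pot:.1f} oz per pot")
```

With these made-up numbers you’d get about 32 oz per run, or roughly 5 oz per pot, which gives you a feel for whether your timer setting is in the right ballpark before you start tweaking the valves.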

Now, in practice, building this was more complicated because of getting all the tubing sizes right. The big culprit was the pump. In order to get an aquarium/pond pump with sufficient head (pressure), you need to get one capable of vastly more flow than needed for this application. And because the pump is designed for that higher flow, its output size is fairly large, so you need to adapt the tubing down to the significantly smaller diameter that’s appropriate for this application (and the manifold). I got it almost right; I had to use some epoxy paste in one place because I couldn’t find an adapter that was quite what I needed.

(There’s also a lot of inconsistency in how sizes are advertised, e.g. both the check valve and the manifold are supposed to be 3/8” but the tubing fits easily on one and was a very tight squeeze on the other. You may need to fiddle around.)

I also added a check valve, though it probably isn’t needed.

In addition to the parts listed above, here’s what you need:

  • Short piece of 3/4” tubing (I had some black corrugated pond tubing from earlier experiments)
  • Epoxy paste, in case the connection between this tubing and the adapter is a bit loose
  • Multi-hose adapter
  • 3/8” tubing (probably ID)
  • Aquarium airline tubing (I believe it’s 3/16” diameter)

Lastly, I was trying to figure out what I could stick the pots in so that I didn’t need to worry if there was some overflow. I was coming up more or less blank. The options either weren’t really the right shape or they were a lot deeper than I needed or wanted (or both). Then I found the perfect thing: a 26” water heater drain pan. Just cover up the hole for the drain with duct tape. To make it even more perfect, I happened to have a round 26” table up in my attic.

Now my herbs are happy and I’m happy. 

Friday, January 11, 2019

More 2008 redux: Open APIs

Given the discussion going on around API openness these days, I thought I’d resurrect yet more text from an “Open Source vs. the Cloud” research note that I wrote in 2008. (See also “Open Source vs. the Cloud Redux” and this Twitter thread.)

At the same time, to focus on source code is to focus on a specific type of openness and freedom that was important historically—but may not be as important going forward. Indeed, in the case of Web services running on massive server farms and cooperating over a network with all manner of other code, services, and data, the value of code is questionable. After all, you can hardly just load it up on a server and do anything useful with it anyway. One needs all those servers and interlocking pieces. Also, the ability to view, modify, and redistribute source code is only one of many rights or protections to consider in a Cloud Computing world. For example, consider these other things that might matter more:


Open APIs. Open Source as we know it today evolved largely in the context of Unix-like operating systems and the programs that ran directly on top of them using “libc” and other system libraries. While we may run monolithic programs over the network, much of the action in Web 2.0 has been in services such as Facebook, Flickr, and Google Maps that expose application programming interfaces (APIs) at a higher level. This allows developers considerable freedom to extend these platforms. Thus, whether a platform or application is Open Source or not, given public APIs, it can be extended and consumed in ways that are very analogous to Open Source. At the same time, the predictability and transparency of the terms of service for APIs—especially in the case of consumer-oriented services—raise their own issues.

Thursday, January 10, 2019

The cloud vs. open source redux

If you’re reading this, you’re probably aware that there is a fracas going on around open source licensing. Quite a bit has been written on the topic and I won’t rehash the specific details here; they’ve been well covered elsewhere.

However, to net l’affaire out, long-simmering issues associated with building businesses on the back of open source software are boiling over. In particular, cloud providers like Amazon Web Services (AWS) not only rely on vast amounts of open source software to run their infrastructure but are increasingly offering cloud services that directly compete with the companies that created much of that open source software in the first place. Furthermore, there’s a widespread (and largely justified) perception that some of these providers are taking from the open source commons far more than they’re giving back.

Some thoughts.

This is not a new concern

As an industry analyst, I wrote a research paper titled “The Cloud vs. Open Source” in 2008. That’s just two years after AWS debuted. Much of the paper cautions against getting too fixated on source code when thinking about user freedoms and openness generally. This remains true today and is the subject of an entire chapter of my book How Open Source Ate Software that I published last year.

However, I also argued that throwing up roadblocks to making use of open source software was ultimately unproductive.

Today, Open Source is widely embraced by all manner of technology companies because they’ve found that, for many purposes, Open Source is a great way to engage with developer and user communities—and even with competitors. Therefore, the concern that, left to their own devices, companies will wholesale strip-mine Open Source projects and “take it all private” seems anachronistic. That’s not to say that everyone will always contribute as much code without copyleft as with it, but the suggestion that copyleft is all that’s holding the whole Open Source process together just doesn’t square with the facts.

Was I just wrong?

Now, at this point, you might turn around and say: “But wholesale strip-mining is exactly what’s happening. We need even stronger protections if the commons is not to be ruthlessly exploited!”

One problem is that all the evidence suggests this doesn’t work. Permissive licenses like Apache, MIT, and BSD have gained in popularity over time. There’s a reason for this. Much of modern open source’s success isn’t about the ability to view source code. It’s about its collaborative development model. And the Eclipse Foundation's Ian Skerrett argues that "projects use a permissive license to get as many users and adopters, to encourage potential contributions. They aren't worried about trying to force anyone. You can't force anyone to contribute to your project; you can only limit your community through a restrictive license."

Another data point is the AGPL. At the time the new version of the copyleft GPL came out (GPLv3 in 2007), the Affero General Public License was introduced as a new GPL variant. Copyleft basically says that if you distribute software, you have to make the source code available. This includes any changes you made. The rub is that, under the GPL’s terms, “distributing” basically means shipping software on a disc or offering it for download. This creates what some saw as a loophole because offering the software as a cloud service isn’t distribution as traditionally defined.

(Is this starting to sound familiar? I told you none of this was new.)

Enter the AGPL, which was just like the GPL except the definition of distribution was broadened to include offering software as a service.

However, the AGPL hasn’t been much used. Ironically, one of its users was MongoDB, one of the companies that have now relicensed their software to prevent its use by cloud providers. In general, lots of companies are nervous that the AGPL could interact with internal code that they don’t want to make publicly available. So it’s often on the license no-fly list for application development.

Which is all to say that the overall direction in open source has been away from restrictive licenses. Leaving aside whether an even more restrictive license could still be reasonably considered “open source,” there just seems very little appetite for such a creature.

Words matter

In my view, a lot of the heat around licenses like the Commons Clause comes about because the companies involved seem to be, on the one hand, trying to gain the perceived value of a proprietary license while also getting credit for still being open source. “Open core” arguably plays the same parlor trick.

It still might have been news if one or more of these companies simply relicensed some or all of their software to a license that was unabashedly proprietary even if it retained some aspects of open source. But I suspect it would have been much less of a tempest.

Whether or not doing so would have been a good idea is a separate question. But it’s their software. Their business challenges are real. Own it. If you’re not going to have an open source development model, I’m not sure why you particularly even care if it’s technically open source or not.

Can we make cloud providers do better?

While the software vendors are taking heat from one side, cloud providers are taking it from another. There is indeed a widespread view that most cloud providers are takers rather than givers. AWS, as the #1 cloud provider, takes particular heat, and it’s mostly deserved. Although Adrian Cockcroft’s team has arguably moved the needle in making AWS play better with open source communities, much more could be done.

However, publicly shaming Amazon will not be a very effective strategy to drive change. If you can't sell the business value of participating in open source, you've pretty much lost the battle. Shaming might net you some contributions for the PR value but certainly no real commitment. Pinning your hopes on Jeff Bezos’ altruism is not a winning move.

Instead, as Linux Foundation Executive Director Jim Zemlin told me during an interview at the Open Source Leadership Summit last year:

The epiphany that many companies have had over the last three to four years, in particular, has been, "Wow. If I have processes where I can bring code in, modify it for my purposes, and then, most importantly, share those changes back, those changes will be maintained over time.

"When I build my next project or a product, I should say, that project will be in line with, in a much more effective way, the products that I'm building.

"To get the value, it's not just consumed, it is to share back and that there's not some moral obligation, although I would argue that that's also important. There's an actual incredibly large business benefit to that as well." The industry has gotten that, and that's a big change.

In closing

None of this is to dismiss the underlying challenges that these changes came in response to. There will always be challenges at the level of the individual company trying to build a business, no matter what the product. But there are more macro dynamics here as well.

This shift of computing towards public clouds recreates a new type of vertically integrated stack. Greg Papadopoulos, one-time chief technology officer of Sun Microsystems, suggested (one suspects hyperbolically, and with an eye towards something IBM founder Thomas J. Watson probably never said) that “the world only needs five computers,” which is to say there would be “more or less, five hyperscale, pan-global broadband computing services giants,” each on the order of a Google.

Some cloud giants have indeed made significant contributions to open source projects. For example, Google originally created Kubernetes, the leading open source project for managing software containers, based on the infrastructure it had built for its internal use. Facebook has open sourced both software and hardware projects.

But, for the most part, these dominant companies use open source to create what are largely proprietary platforms far more than they reinvest to perpetuate ongoing development in the commons. And they’re sufficiently large and well-resourced that they mostly don’t depend on cooperative invention at this point.

It’s easy to dismiss free-riding as a problem, given that organizations that free-ride are missing out in some ways. However, to the degree that large tech companies, both cloud providers and others such as Apple, take far more from the open source commons than they contribute back, it at least raises concerns about open source sustainability.

The last section is based in part on content from How Open Source Ate Software (Apress, 2018).

Wednesday, January 02, 2019

2019 New Year updates

A new year. A few updates.

In 2018, for the second year in a row, I published a book: How Open Source Ate Software, with Apress. Buy early, buy often, as the saying goes. If I do anything along these lines in 2019, it will probably be a mini-book about the lessons we can take away from Intel’s Itanium processor, which I followed closely as an analyst. I’ve written an outline, and we’ll have to see if I get the energy to do something more.

In part because of my open source book, this blog has been a bit inactive of late. I’ve also been publishing in a number of other places, including The Enterprisers Project and TechTarget. I plan to continue doing so, but I’m going to try to get back into the swing of jotting down quick thoughts here, related to both professional interests and otherwise. (I had planned to kick off a separate travel and food blog last year, but I pretty much got no further than registering a domain. I’m just going to post here as the mood strikes.)

One decision I made over the holidays was that I’m going to pull the plug on my Cloudy Chat podcast. It’s gotten pretty irregular and unfocused and I don’t really see that changing. Furthermore, while my work focus remains fairly broad, it’s shifted towards emerging technology topics, which “cloud” really isn’t any longer even if some of its newer aspects are. This isn’t to say that I won’t do the occasional published interview. But I think a podcast implies a schedule and topic focus that I don’t really see being in the cards for the immediate future.

What may take its place is some more work with video. I haven’t fleshed out what that means exactly but it’s an idea I’ve noodled on over time. I’m hoping that taking “I really ought to do a new podcast” off my omnipresent mental to-dos will free up some cycles to create short educational videos related to emerging tech areas.

Contact and social media information is unchanged. I’m on Twitter as @ghaff. My photos are on Flickr. (I’m hopeful for Flickr’s revitalization post-Yahoo.) I’m on LinkedIn but, if I don’t know you, you send a form invite, and our connection isn’t obvious to me, I’ll probably ignore it. I’m still on Facebook but I’m pretty selective about friend requests; if I ignore you, don’t take it personally if you’re a casual and/or professional acquaintance.

I occasionally do product and book reviews. Feel free to inquire if you have something you think might interest me. But I don’t do a lot of them, and I emphatically don’t write about things that I haven’t gotten hands-on with. I also sometimes do interviews at events. (See above re: podcasts, however.) I mostly limit these to discussions about open source projects and other non-commercial topics. Otherwise, there’s just too much opportunity for conflicts of interest.

I’m always open to speaking at/attending events on professional topics of interest. Some presentations are up on Slideshare. I do have a busy travel schedule though so I have to prioritize where I spend my time. (Somehow, the first few months of the year ended up especially crazy.)

To be done: Updating my overall website. It’s still a good source for links but I never really liked the design I used and I’ve held off on updating it recently as a result.