Previous container podcasts with Mark:
- Architecting for containers
- Containerized operating systems
- The layers of the containerization stack
- Docker, Kubernetes, and app packaging
Listen to OGG (0:16:52)
For my Cloudy Chat podcast series, I’ve been focusing lately on repeat guests drawn heavily from local Red Hat colleagues in Westford. I find it’s a great way to get interesting material out there without a whole lot of logistical overhead. Especially with all the activity going on around containers, Docker, Kubernetes, configuration management, and containerized operating systems like Project Atomic, there’s no shortage of things to cover without going too far afield.
I describe an earlier setup here. (See also how I use Google+ Hangouts for remote recording.) However, over time, I’ve experimented with some different setups for in-person recording to simplify the process while maintaining good quality. I’m pretty happy with where I’ve ended up—with the caveat that I’m always learning and tweaking things.
For recordings in the office:
In my earlier post, I describe recording using a laptop and a USB microphone. I’ve also done recordings using a Peavey PV6 USB Mixing Console and XLR dynamic microphones connected to a laptop. I still use the latter setup if there are more than two of us and/or I want to control the individual microphone levels. However, in the interests of simplicity, I now use a digital recorder connected to two dynamic microphones on desktop stands. Here’s the specific gear list:
You’ll probably also want a larger SD card (the recorder comes with a 2GB one), a mini-USB cable and power adaptor, and some spare AA batteries.
With this setup, you can just sit the recorder on the table, plug in the microphones, and place one in front of yourself and one in front of your guest. I’m not going to go into every detail of the recorder, but a few tips and tricks are worth noting.
For recordings on the road:
While the above setup is relatively compact, it’s more than I really want to travel with most of the time. Furthermore, it requires that you be able to find a table in a relatively quiet area, which is often far easier said than done at the conferences I attend. You can’t really use the Tascam as a handheld recorder with its internal mics; they’re just too sensitive and pick up the noise of you handling the recorder.
Instead, I use my iPhone or iPad and plug in a handheld iRig microphone. There's a corresponding iOS application but there's no reason you couldn't use any other recording application; the microphone just plugs into a standard 3.5mm jack. One nice detail of the iRig is that it comes with a splitter built into the jack. This means that you can easily monitor the recording with headphones, which can be useful if you're dealing with intermittent background noise.
I then just hold the microphone and move it up close to whoever is speaking at the moment. This generally works quite well for the style of interview podcasts that I do. Afterward, I transfer the recording to my laptop using whatever mechanism the recording app provides—in the case of iRig, I send it up to a server with FTP, then download it. Finally, I edit the recording using Audacity in the usual way.
The same company also makes a small microphone that plugs directly into the jack of an iPhone. I don’t find handling the iPhone like a microphone quite as natural as handling a cylindrical microphone—but this mic lives in my accessory bag so it’s always with me in case an opportunity to make a recording pops up.
The Internet of Things (IoT) is hot. It’s also hard to get your head around given the proliferation of wildly different use cases, types of devices, interconnection mechanisms, and data patterns. What’s more, IoT is also intertwined with all the other big computing trends from clouds to data analysis to DevOps processes. In this session, Red Hat’s Gordon Haff will draw on a wide range of research, in addition to user examples, to help you structure your thinking about and approach to IoT as a technology enabler and a business opportunity. This discussion will include the types of platforms associated with IoT, connectivity characteristics, the intersection with data analytics and social, and a snapshot of current standards work.
Originally delivered for MD&M West, Anaheim, CA on 10 Feb 2015
Containers were initially pitched as more or less just another form of partitioning: a way to split large systems into smaller ones, so that workloads not requiring a complete system by themselves could coexist without interfering with each other. Server/hardware virtualization is the most familiar form of partitioning today but, in its x86 form, it was only the latest in a long series of partitioning techniques initially applied mostly to mainframes and Unix servers.
The implementation details of these various approaches differed enormously and even within a single vendor—nay, within a single system design—multiple techniques hit different points along a continuum which mostly traded off flexibility against degree of isolation between workloads. For example, the HP Superdome had a form of physical partitioning using hardware, a more software-based partitioning approach, as well as a server virtualization variant for HP-UX on the system’s Itanium processors.
But, whatever their differences, these approaches didn’t really change much about how one used and interacted with the individual partitions. They were like the original pre-partitioned systems; there were just more of them, and they were correspondingly smaller. Indeed, that was sort of the point. Partitioning was fundamentally about efficiency and was logically just an extension of the resource management approaches that had historically allowed multiple workloads to coexist.
At a financial industry luncheon discussion I attended last December, one of the participants coined a term that I promptly told him I was going to steal. And I did. That term was “skeuomorphic virtualization” which he used to describe hardware/server virtualization. Skeuomorphism is usually discussed in the context of industrial design. Wikipedia describes a skeuomorph as "a derivative object that retains ornamental design cues from structures that were necessary in the original.” The term has entered the popular lexicon because of the shift away from shadows and other references to the physical world such as leather-patterned icons in recent versions of Apple’s iOS.
However, the concept of skeuomorphism can be thought of as applying more broadly—to the idea that existing patterns and modes of interaction can be retained even though they’re not necessarily required for a new technology. In the case of “skeuomorphic virtualization,” a hypervisor abstracts the underlying hardware. While this abstraction was employed over time to enable new capabilities like live migration that were difficult and expensive to implement on bare metal, virtualized servers still largely look and feel like physical ones to their users. Large pools of virtualized servers do require new management software and techniques—think the VMware administrator role—but the fundamental units under management still have a lot in common with a physical box: independent operating system instances that are individually customizable and which are often relatively large and long-lived. Think of all the work that has gone into scaling up individual VMs in both proprietary virtualization and open source KVM/Red Hat Enterprise Virtualization.
In fact, I’ll go so far as to argue that the hardware virtualization approach largely won out over the alternatives of the day circa 2000 because of skeuomorphism. Hardware virtualization let companies use their servers more efficiently by placing more workloads on each server. But it also let them continue to use whatever hodgepodge of operating system versions they were using and to continue to treat individual instances as unique “snowflake” servers if they so chose. The main OS virtualization (a.k.a. containers) alternative at the time—SWSoft’s Virtuozzo—wasn’t as good a match for highly heterogeneous enterprise environments because it required all the workloads on a server to run atop a single OS kernel. In other words, it imposed requirements that went beyond the typical datacenter reality of the day. (Lots more on that background.)
Today, however, as containers enjoy a new resurgence of interest, it would be a mistake to continue to treat this form of virtualization as essentially a different flavor of physical server. As my Red Hat colleague Mark Lamourine noted on a recent podcast:
One of the things I've hit so far, repeatedly, and I didn't really expect it at first because I'd already gotten myself immersed in this was that everybody's first response when they say, "Oh, we're going to move our application to containers," is that they're thinking of their application as the database, the Web server, the communications pieces, the storage. They're like, "Well, we'll take that and we'll put it all in one container because we're used to putting it all on one host or all in one virtual machine. That'll be the simplest way to start leveraging containers." In every case, it takes several days to a week or two for the people looking at it to suddenly realize that it's really important to start thinking about decomposition, to start thinking about their application as a set of components rather than as a unit.
In other words, modern containers can be thought of and approached as “fat containers” that are essentially a variant of legacy virtual machines. But it’s far more fruitful and useful to approach them as something fundamentally new and enabling that’s part and parcel of an environment including containerized operating systems, container packaging systems, container orchestration like Kubernetes, DevOps practices, microservices architectures, “cattle” workloads, software-defined everything, and pervasive open source as part of a new platform for cloud apps.
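As a concrete (if simplified) illustration of the decomposition Mark describes, here is a hypothetical Compose-style definition that runs the web server, application logic, and database as three separate containers rather than packing everything into one “fat container.” The image names, service names, and settings are illustrative assumptions, not anything from the discussion above:

```yaml
# Hypothetical sketch: each application tier gets its own container,
# instead of bundling web server, app code, and database into one image.
version: "2"
services:
  web:
    image: nginx              # front-end web server in its own container
    ports:
      - "80:80"
    depends_on:
      - app
  app:
    image: example/app        # application logic, built and shipped separately
    environment:
      - DATABASE_HOST=db      # components find each other by service name
    depends_on:
      - db
  db:
    image: postgres           # stateful service isolated in its own container
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:                     # named volume so data outlives the container
```

Because each component is its own container, each can be scaled, updated, and scheduled independently—which is exactly what orchestration systems like Kubernetes are built to exploit.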