Wednesday, January 29, 2025

What we got wrong about the cloud

Not everyone bought the comparison I posted the other day between how the cloud developed and how AI is developing. But I wanted to flesh out some thoughts about cloud from that post and from my presentation at the Linux Foundation Member Summit in 2023.

First, some housekeeping. When I write "we got wrong," I don't mean everyone; some of us, myself included, never fully bought into a number of the widely believed assumptions. Furthermore, the aim of this post is not to belittle the important (and perhaps growing) role that public clouds play. Rather, it's to chart how the cloud has evolved over roughly the past 20 years, and why.

20 years is a convenient timeframe. That's about when Sun Microsystems started talking up Sun Grid. (Coincidentally, it's also about when I was getting established as an IT industry analyst, and cloud matters fit neatly into the topics I was covering.) Amazon Web Services (AWS) would roll out its first three services in 2006. This is, in part, just a slice of history. But there are some lessons buried in the assumptions that turned out to be generally flawed.

The early narrative

Cloud computing would supposedly follow a trajectory similar to the distribution of electricity over a grid (this was before the deployment of solar power at any scale). As I wrote in CNET in 2009:

The vision of cloud computing, as originally broached by its popularizers, wasn't just about more loosely coupled applications being delivered over networks in more standardized and interoperable ways—a sort of next-generation service-oriented architecture, if you would. Rather, that vision was about a fundamental change to the economics of computing.

As recounted by, among others, Nicholas Carr in his The Big Switch, cloud computing metaphorically mirrors the evolution of power generation and distribution. Industrial-revolution factories—such as those that once occupied many of the riverside brick buildings I overlook from my Nashua, N.H., office—built largely customized systems to run looms and other automated tools, powered by water and other sources.

These power generation and distribution systems were a competitive differentiator; the more power you had, the more machines you could run, and the more you could produce for sale. Today, by contrast, power (in the form of electricity) is just a commodity for most companies—something that they pull off the grid and pay for based on how much they use.

The same article was titled "There is no 'Big Switch' for cloud computing." Go ahead and read it. But I'll summarize some of the key points here and add a few that became clear as cloud computing developed over time.

Economics

One of the supposedly strongest arguments for cloud computing was that of course it would be cheaper. Economies of scale and all that. Quoting myself again (hey, I'm lazy):

Some companies may indeed generate power in a small way [again, pre large-scale solar]—typically as backup in outages or as part of a co-generation setup—but you'll find little argument that mainstream power requirements are best met by the electric utility. The Big Switch argues that computing is on a similar trajectory.

And that posits cloud computing being a much more fundamentally disruptive economic model than a mostly gradual shift toward software being delivered as a service and IT being incrementally outsourced to larger IT organizations. It posits having the five "computers" (which is to say complexes of computers) in the world that Sun CTO Greg Papadopoulos hyperbolically referred to—or at least far, far fewer organizations doing computing than today.

It wasn't clear, even at the time, that once you got to the size of a large datacenter, you were at much of a disadvantage relative to the economies of scale a cloud provider could bring to equipment and operations. And, over time, even though there are a variety of other reasons to use clouds, "the cloud is cheaper" often rings hollow, especially for predictable workloads and company trajectories. Claims of a mass repatriation of applications to on-prem are probably overstated, but many organizations are being more careful about what they run where.

Computing as a utility

“Computing may someday be organized as a public utility just as the telephone system is a public utility,” Professor John McCarthy said at MIT’s centennial celebration in 1961. That, along with the economics, is what underpinned much of the early thinking about cloud computing.

Not only would computing delivered that way be cheaper (so the thinking went), but it would be a simple matter of selling units of undifferentiated compute and storage, even if AWS's initial messaging service gave an early hint that maybe it wasn't as simple as all that.

But a number of early projects were built on the implicit assumption that cloud would be a fairly simple set of services, even if some specifics might differ from provider to provider.

The Deltacloud API imagined abstracting away the differences among individual cloud providers' APIs. Various management products imagined the (always mostly mythical) single pane of glass across multiple clouds.

In reality, though, the major cloud providers came out with an astonishing array of differentiated services. There are ways to provide some level of commonality by keeping things simple and by using certain third-party products. For example, my former employer, Red Hat, sells OpenShift, an application development platform based on Kubernetes that provides some level of portability between cloud providers and on-prem deployments.
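
To make the portability point concrete, here's a minimal sketch (in Python, using the Kubernetes client library) of the idea: the same Deployment definition can be applied to clusters running on different providers, because the Kubernetes API, not the underlying cloud, is the interface you program against. The image name and cluster context names are hypothetical, and error handling is omitted.

    # A sketch only: the same Deployment object applied to clusters on
    # different clouds (or on-prem). Context names and image are made up.
    from kubernetes import client, config

    def deploy(context_name: str) -> None:
        # Each kubeconfig context can point at a different provider's cluster.
        config.load_kube_config(context=context_name)
        apps = client.AppsV1Api()
        deployment = client.V1Deployment(
            metadata=client.V1ObjectMeta(name="web"),
            spec=client.V1DeploymentSpec(
                replicas=3,
                selector=client.V1LabelSelector(match_labels={"app": "web"}),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels={"app": "web"}),
                    spec=client.V1PodSpec(containers=[
                        client.V1Container(name="web", image="registry.example.com/web:1.0"),
                    ]),
                ),
            ),
        )
        apps.create_namespaced_deployment(namespace="default", body=deployment)

    # The same definition, three different places it could run:
    for ctx in ["aws-cluster", "azure-cluster", "onprem-cluster"]:
        deploy(ctx)

That commonality only goes so far, of course; the moment an application leans on a provider's managed database, queue, or AI service, the portability story gets more complicated.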

However, the world in which you had the same sort of transparency in switching cloud providers that you largely have with electricity never came to pass. Which brings us to...

Cloudbursting

The goal of cloud portability and interoperability largely got obscured by the chimera of cloudbursting, the dynamic movement of workloads from one cloud to another, including on-prem. As I wrote in 2011:

Cloudbursting debates are really about the dynamic shifting of workloads. Indeed, in their more fevered forms, they suggest movement of applications from one cloud to another in response to real-time market pricing. The reasoned response to this sort of vision is properly cool. Not because it isn't a reasonable rallying point on which to set our sights and even architect for, but because it's not a practical and credible near- or even mid-term objective.

Since I wrote that article, it's become even clearer that there are many obstacles to automagical cloud migration, not least the laws of physics (especially as data grows in an AI-driven world) and, often, the egress charges associated with shipping that data from one cloud to someplace else.
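
To put rough numbers on that, here's a back-of-the-envelope sketch. The dataset size, egress rate, and link speed are illustrative assumptions, not any provider's actual pricing.

    # Why dynamically "bursting" a data-heavy workload between clouds is hard:
    # both the bill and the clock work against you. All figures are assumed.
    DATASET_TB = 200
    EGRESS_PER_GB = 0.09      # assumed $/GB egress; real rates vary by provider and tier
    LINK_GBPS = 10            # assumed sustained throughput between sites

    dataset_gb = DATASET_TB * 1024
    egress_cost = dataset_gb * EGRESS_PER_GB                 # ~$18,000
    transfer_hours = (dataset_gb * 8) / LINK_GBPS / 3600     # ~45 hours

    print(f"Egress: ${egress_cost:,.0f}, transfer time: {transfer_hours:.1f} h")

Even with generous assumptions, moving a couple hundred terabytes is a project, not something you do in response to spot-price fluctuations.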

While companies do sometimes move workloads (especially new ones) to public clouds or repatriate certain workloads, usually because they think they can save money, very few workloads are scurrying around from one cloud to another minute to minute, day to day, or even month to month.

The edge

Dovetailing with some of the above is the concept of edge computing, i.e., computing that happens close to users and data.

Some aspects of edge computing are mostly a rebadging of remote and branch office (ROBO) computing environments. Think of the computers in the back of a Home Depot big-box store. But edge has also come into play with the pervasive sensors associated with the Internet of Things (IoT), the data they generate, and the AI operating on that data. Network limitations (and the cost of transmitting data) mean it often makes sense to filter and analyze data near where it is collected, even if the original models are developed and trained in some central location.
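
A minimal sketch of that pattern, with made-up sensor values and thresholds: summarize and filter at the edge, and send only the interesting part upstream.

    # Illustrative only: reduce a stream of local sensor readings to a small
    # summary so that the raw data doesn't all have to cross the network.
    def summarize(readings, threshold=75.0):
        anomalies = [r for r in readings if r > threshold]
        return {
            "count": len(readings),
            "mean": round(sum(readings) / len(readings), 2),
            "anomalies": anomalies,      # only these need to leave the edge
        }

    raw = [70.1, 71.4, 88.0, 69.9, 92.3]     # e.g., a minute of temperature samples
    print(summarize(raw))                     # a small payload instead of every sample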

Essentially, edge is one more reason that the idea that everything would move to the cloud was flawed.

Security

There was a lot of angst and discussion early on about cloud security. The reality is that security is now almost certainly seen as less of a concern, while some nuances of broader governance have probably increased in importance.

As for security in the narrower sense, it has come to be generally accepted that the major public clouds don't present much in the way of unique challenges. Your application security still matters, as do the usual matters of system access and so forth. You also need people who understand how to secure your applications and access in a public cloud environment, which may differ from on-prem. But public clouds are not inherently less secure than on-prem systems connected to the public internet.

What has come to be seen as an issue, especially given geo-political conflicts, is where the data resides. While distribution of cloud computing centers to different regions was originally viewed as mostly a matter of redundancy and protecting against natural disasters and the like, increasingly it's about storing data and providing the services operating on that data in a particular legal and governmental jurisdiction. 

Conclusion

None of the above should be taken as a takedown of cloud computing. For the right use cases, public clouds have a lot going for them, some of which many saw early on. For example, companies starting out don't need to spend capital on racks of servers when, odds are, they may not need that many servers (or they may need more to handle workload spikes). Flexibility matters.

So does the ability to focus on your business idea rather than managing servers—though public clouds come with their own management complexities.

However, the development of cloud computing over the past 20 years is also a useful lesson. Certainly some technical innovations just don't work out. But others, like cloud, do—just not in many of the ways we expected them to. Perhaps that's an obvious point. But it's still one worth remembering.


Why AI reminds me of cloud computing

 Even if you stipulated that cloud computing was going to be a big deal, the early cloud narrative got a lot of things wrong. 

To name just a few, which I'll deal with in a subsequent post: cloud wasn't a utility, security really wasn't the key differentiator versus on-premises, and cost savings weren't a slam dunk. Much deeper discussion for another day. Cloud computing was an important movement, but the details of that movement were often unclear, and a lot of people got a lot of those details wrong.

I posit that the same is the case with AI.

As someone who was in the industry through the second AI winter, I'm pretty sure I'd (probably) be foolish to paint AI as yet another passing fad. But I'm also pretty sure that any picture I paint of the five-to-ten-year-out future is going to miss some important details.

Certainly, there's a lot of understandable enthusiasm (and some fear) around large language models (LLMs). My take is that it's hard to dispute that there is some there there. Talking with ex-IBM exec Irving Wladawsky-Berger at the MIT Sloan CIO Symposium in 2023, we jumped straight to AI. To Irving, “There’s no question in my mind that what’s happening with AI now is the most exciting/transformative tech since the internet. But it takes a lot of additional investment, applications, and lots and lots of [other] stuff.” (Irving also led IBM’s internet strategy before its Linux one.) I agree.

But, and here's where the comparison to cloud comes in, the details of that evolution seem a bit fuzzy.

AI has a long history. The origin of the field is often dated to a 1956 summer symposium at Dartmouth College although antecedents go back to at least Alan Turing. 

It's been a bumpy ride. There have probably been at least two distinct AI winters as large investments in various technologies didn't produce commensurate value. The details are also a topic for another day. Where do we stand now?

AI today

The current phase of AI derives, to a large degree, from deep learning, which in turn is largely based on deep neural networks (NNs) of increasing size (measured in the number of weights, or parameters) trained on increasingly large datasets. There are ongoing efforts to downsize models because of the cost and energy consumption associated with training them but, suffice it to say, it's a resource-intensive process.

Much of this ultimately derives from work that Geoffrey Hinton and others did on backpropagation and NNs in the 1980s, but it became much more interesting once plentiful storage, GPUs, and other specialized and fast computing components became available. Remember, a 1980s computer was typically chugging along at a few MHz, with disk drives sized in megabytes.
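
For readers who haven't looked under the hood, here's a toy sketch of the training loop that backpropagation makes possible: a forward pass, an error signal, gradients pushed back through the layers via the chain rule, and a small parameter update, repeated many times. Today's models differ in scale and architecture, not in this basic shape; the network below is a deliberately tiny example learning XOR.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)        # hidden-layer parameters
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)        # output-layer parameters
    lr = 0.5

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # error signal (squared-error loss) and backpropagated gradients
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # gradient-descent update
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

    print(np.round(out, 2))    # converges toward [0, 1, 1, 0]

Swap the few dozen weights here for hundreds of billions, the four training rows for vast datasets, and the laptop CPU for racks of GPUs, and you have the resource-intensive process described above.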

The latest enthusiasm around deep learning is generative AI, of which large language models (LLMs) are the most visible subcategory. One of the innovations here is that models can answer questions and solve problems in a way that doesn't require human-supervised labeling of all the data fed into model training. A side effect is that the answers are sometimes nonsense. But many find LLMs an effective tool that's continually getting better.
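
The "no human labeling required" point is easy to see in a sketch: in next-token prediction, the training labels are simply the next words of the text itself, so any large body of text becomes training data. (Illustrative code only; real systems operate on subword tokens and vastly larger corpora.)

    # Self-supervised labels: each prefix of the text predicts the token that follows it.
    text = "the cloud did not become a simple utility".split()
    examples = [(text[:i], text[i]) for i in range(1, len(text))]
    for context, target in examples[:3]:
        print(context, "->", target)
    # ['the'] -> cloud
    # ['the', 'cloud'] -> did
    # ['the', 'cloud', 'did'] -> not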

Let's take AI as a starting point just as we could take cloud of 20 years ago as a starting point. What are some lessons we can apply?

Thoughts and Questions about AI's Evolution

I've been talking about some of these ideas for a while—before there were mutterings of another AI winter. For the record, I don't think that's going to happen, at least not at the scale of prior winters. However, I do think we can safely say that things will veer off in directions we don't expect and that most people aren't predicting.

One thing I have some confidence in reflects a line from Russell and Norvig's AI textbook, which predates LLMs but I think still applies. “We can report steady progress. All the way to the top of the tree,” they wrote. 

The context of this quote is that the remarkable advance of AI over maybe the last 15 years has been largely the result of neural networks and hardware that's sufficiently powerful to train and run models that are large enough to be useful. That's Russell and Norvig's tree.

However, AI is a broader field, especially when you consider that it is closely related to and, arguably, intertwined with Cognitive Science. This latter field got its start at a different event a few months after the Dartmouth College AI conference, which is often taken to mark the birth of AI—though the "Cognitive Science" moniker came later. Cognitive Science concerns itself with matters like how people think, how children learn, linguistics, reasoning, and so forth.

What’s the computational basis for learning concepts, judging similarity, inferring causal connections, forming perceptual representations, learning word meanings and syntactic principles in natural language, and developing physical world intuitions? 

In other words, these are questions largely divorced from commercial AI today, for the simple reason that work in these areas has historically struggled to make clear progress, much less produce commercially interesting results. But many of us strongly suspect that they will ultimately have to become part of the AI story.

There are also questions related to LLMs. 

How genuinely useful will they be—and in what domains—given that they can output nonsense (hallucinations)? Related are a variety of bias and explainability questions. I observe that reactions to LLMs on tech forums differ considerably, with some claiming huge productivity improvements and others mostly giving a shrug. Personally, my observation with writing text is that they do a decent job of spitting out largely boilerplate introductory text and definitions of terms, and thereby can save some time. But they're not useful today for more creative content.

Of course, what LLMs can do effectively has implications for the labor market, as a paper by MIT economist David Autor and co-authors Levy and Murnane argues.

Autor’s basic argument is as follows. Expertise is what makes labor valuable in a market economy. That expertise must have market value and be scarce; non-expert work, in general, pays poorly.

With that context, Autor classifies three eras of demand for expertise. The industrial revolution first displaced artisanal expertise with mass production. But as industry advanced, it demanded mass expertise. Then came the computer revolution, whose roots really go back to the Jacquard loom. The computer is a symbolic processor, and it carries out tasks efficiently—but only those that can be codified.

Which brings us to the AI revolution. Artificially intelligent computers can do things we can’t codify. And they know more than they can tell us. Autor asks, “Will AI complement or commodify expertise? The promise is enabling less expert workers to do more expert tasks”—though Autor has also argued that policy plays an important role. As he told NPR: “[We need] the right policies to prepare and assist Americans to succeed in this new AI economy, we could make a wider array of workers much better at a whole range of jobs, lowering barriers to entry and creating new opportunities.”

The final wild card that could have significant implications for LLMs (and generative AI more broadly) revolves around various legal questions. The most central one is whether LLMs are violating copyright by training on public but copyrighted content like web pages and books. (This is an issue even with open source software, which generally still requires attribution in some form. There are a variety of other open source-related concerns as well, such as whether the training data is open.)

Court decisions that limit LLMs' access to copyrighted material would have significant implications. IP lawyers I know are skeptical that things will go this way, but lawsuits have been filed, and some people feel strongly that most LLMs are effectively stealing.

We Will Be Surprised

When I gave a presentation at the Linux Foundation Member Summit in 2023 about what the next decade might bring for computing, AI was of course on the list of technologies, and I talked about some of the things I've discussed in this post. But the big takeaway I tried to leave attendees with was that the details are hard to predict.

After all, LLMs weren't part of the AI conversation until a couple of years ago; ChatGPT's first public release was only in 2022. Many were confident that their pre-teens wouldn't need to learn to drive, even as some skeptics, like MIT's John Leonard, were saying they didn't expect autonomous driving to arrive in their lifetimes. Certainly, there's progress—probably most notably Waymo's taxi service in a few locations. But it's hard to see the autonomous equivalent of Uber/Lyft's ubiquity anytime soon, much less autonomous driving as a trim option when you buy a car. (Tesla's Full Self-Driving doesn't really count; you still need to pay attention and be ready to take over.)