Wednesday, May 24, 2023

AI is looking summer-y

 

The whispers about an impending AI winter never became all that commonplace, loud, or confident. However, as widespread commercialization of some of the most prominent AI applications—think autonomous vehicles—slipped well past earlier projections, doubts were inevitable. At the same time, the level of commercial investment, compared with past AI winters, made betting wholesale against AI seem unwise.


AI is the technology in the moment’s spotlight. On May 23, it was foundational to products announced at Red Hat Summit in Boston, such as Ansible Lightspeed. Then again, the real surprise these days would be if AI didn’t have a prominent position at a technology vendor’s show.

But for a perspective that’s less of a pure technologist take, consider the prior week’s MIT Sloan CIO Symposium, “Driving Resilience in a Turbulent World,” held in Cambridge, MA. This event tends to take a higher-level view of the world, albeit one flavored by technology. Panels this year about how the CIO has increasingly evolved into a chief regulation officer, chief resilience officer, and chief transformation officer are typical of the lenses this event uses to examine the trends that matter most to IT decision makers. As most large organizations become technology companies—and software companies in particular—it’s up to the CIO to partner with the rest of the C-suite to help chart strategy in the face of changing technological forces. And that means considering tech in the context of other forces—and concerns. For example, supply chain optimization is a broad business challenge even if it needs technology as part of the puzzle.


AI rears its head


But even if AI was a relatively modest part of the agenda on paper, mostly in the afternoon, everyone was talking about it to a greater or lesser degree.


For example, Tom Peck, Executive Vice President and Chief Information and Digital Officer at Sysco, said that the company was still “having trouble finding a SKU of AI in the store. We’re trying to figure out how to pluck AI and apply it to our business. Bullish on it but still trying to figure out build vs. buy.”


If I were to summarize the overall attitude towards AI at the event, it was something like: really interesting, really early, and we’re mostly just starting to figure out the best ways to get business value from it.


A discussion with Irving Wladawsky-Berger


I’ve known Irving Wladawsky-Berger since the early 2000s when he was running IBM’s Linux Initiative; he’s now a Research Affiliate at MIT’s Sloan School of Management, a Fellow of MIT’s Initiative on the Digital Economy and of MIT Connection Science, and Adjunct Professor at the Imperial College Business School. He’s written a fair bit on AI; I encourage you to check out his long-running blog.


There were lots of things on the list to talk about. But we jumped straight to AI. It was that sort of day. To Irving, “There’s no question in my mind that what’s happening with AI now is the most exciting/transformative tech since the internet. But it takes a lot of additional investment, applications, and lots and lots of [other] stuff.” (Irving also led IBM’s internet strategy prior to Linux.)


At the same time, Irving warns that major effects will probably not be seen overnight. “It’s very important to realize that many things will take years of development if not decades. I’m really excited about the generative AI opportunity but [the technology is] only about 3 years old,” he told me. 


We also discussed The Economist’s “How to Worry Wisely about AI” issue, especially an excellent essay by Ludwig Siegele titled “How AI could change computing, culture and history.” One particularly thought-provoking statement from that essay is: “For a sense of what may be on the way, consider three possible analogues, or precursors: the browser, the printing press and practice of psychoanalysis. One changed computers and the economy, one changed how people gained access and related to knowledge, and one changed how people understood themselves.”


Psychoanalysis? Freud? It’s easy to see the role the browser and the printing press have had as world-changing inventions. Siegele goes on to write: “Freud takes as his starting point the idea that uncanniness stems from ‘doubts [as to] whether an apparently animate being is really alive; or conversely, whether a lifeless object might not be in fact animate’. They are the sort of doubts that those thinking about llms [Large Language Models] are hard put to avoid.”


This in turn led to more thought-provoking conversation about linguistic processing, how babies learn, and emergent behaviors (“a bad thing and a bug that has nothing to do with intelligence”). Irving concluded by saying “We shouldn’t stop research on this stuff because it’s the only way to make it better. It’s super complex engineering but it’s engineering. It’s wonderful. I think it will happen but stay tuned.”


The economics


David Autor, Professor of Economics at MIT, closed out the day with a keynote titled “The Impact of AI on Jobs and the Economy.”


If you want to dive into an academic paper on the topic, here’s the paper by Autor and co-authors Levy and Murnane.


However, Autor’s basic argument is as follows. Expertise is what makes labor valuable in a market economy. That expertise must have market value and be scarce; non-expert work, in general, pays poorly.


With that context, Autor classifies three eras of demand for expertise. The industrial revolution first displaced artisanal expertise with mass production. But as industry advanced, it demanded mass expertise. Then came the computer revolution, which really goes back to the Jacquard loom. The computer is a symbolic processor and it carries out tasks efficiently—but only those that can be codified.


Which brings us to the AI revolution. Artificially intelligent computers can do things we can’t codify. And they know more than they can tell us. The question Autor poses is: “Will AI complement or commodify expertise? The promise is enabling less expert workers to do more expert tasks”—though Autor has also argued that policy plays an important role. As he told NPR in early May: “[We need] the right policies to prepare and assist Americans to succeed in this new AI economy, we could make a wider array of workers much better at a whole range of jobs, lowering barriers to entry and creating new opportunities.”

Sunday, April 23, 2023

Kubecon: From contributors to AI

I find that large industry shows like KubeCon + CloudNativeCon (henceforth just KubeCon) are often at least as useful for plugging into the overall zeitgeist of the market landscape and observing the trajectory of various trends as they are for diving deep on any single technology. This event, held in late April in Amsterdam, was no exception. Here are a few things that I found particularly noteworthy; they may help inform your IT planning.


Contributors! Contributors! Contributors!


Consider first who attended. With about 10,000 in-person registrations, it was the largest KubeCon Europe ever. Another 2,000 never made it off the waiting list. Especially if you factor in tight travel budgets at many tech companies, it’s an impressive number by any measure. By comparison, last year’s edition in Valencia had 7,000 in-person attendees; hesitancy to attend physical events has clearly waned.


Some other numbers. There are now 159 projects within the Cloud Native Computing Foundation (CNCF), which puts on this event; the CNCF sits under the broader Linux Foundation umbrella. It started with one project, Kubernetes, and even as recently as 2017 it had just seven. This highlights how the cloud native ecosystem has become about so much more than Kubernetes. (It also indirectly suggests that a lot of complaints about Kubernetes complexity are really complaints about the complexity of trying to implement cloud-native platforms from scratch. Hence the popularity of commercial Kubernetes-based platforms that do a lot of the heavy lifting with respect to curation and integration.)


Perhaps the most striking stat of all, though, was the percentage of first-timers at KubeCon: 58%. Even allowing for KubeCon’s growth, that’s a great indicator of new people coming into the cloud-native ecosystem. So all’s good, right?


Mostly. I’d note that the theme of the conference was “Communities in Bloom.” (The conference took place with tulips in bloom around Amsterdam.) VMware’s Dawn Foster and Apple’s Emily Fox also gave keynotes on building a sustainable contributor base and on preserving knowledge as people transition out of a project, respectively. This all has a common theme. New faces are great, but a torrent of new faces can stress maintainers and various support systems. The torrent needs to be channeled.


Liz Rice, Chief Open Source Officer at Isovalent and Emeritus chair of the Technical Oversight Committee, put it to me this way: the deliberate focus on community at this KubeCon doesn’t indicate a crisis by any means. But the growth of the CNCF ecosystem and the corresponding level of activity is something to be monitored, and perhaps responded to with some concrete steps.


It’s about the platform


The platform engineer role, and the “platform” term generally, have really come into the spotlight over the past couple of years. Panelists on the media panel about platform engineering described platforms as needing to be documentable, secure, able to connect to different systems such as authentication, to incorporate debuggability and observability, and, perhaps most of all, to be flexible.


From my perspective, platform engineering hasn’t replaced DevOps as a concept but it’s mostly a more appropriate term in the context of Kubernetes and the many products and projects surrounding it. DevOps started out as something that was as much about culture as technology; at least the popular shorthand was that of breaking down the wall between developers and operations. While communicating across silos is (mostly) a good thing, at scale, operations mostly provisions a platform for developers — perhaps incorporating domain-specific touches relevant to the business — and then largely gets out of the way. Site Reliability Engineers (SREs) shoulder much of the responsibility for keeping the platform running rather than sharing that responsibility with developers. The concept isn’t new, but “DevOps” historically got used both for breaking down walls between the two groups and for creating an abstraction that allowed the two groups to largely act autonomously. Platform engineering is essentially co-opting the latter meaning.


The latest abstraction that we’re just starting to see is the Internal Developer Platform (IDP) — such as the open source Backstage that came out of Spotify. “Freedom with guardrails” is how one panelist described the concept. An IDP provides developers with all the tools they need under an IT governance umbrella; this can create a better experience for developers by presenting them with an out-of-the-box experience that includes everything they need to start developing. It’s a win for IT too. It cuts onboarding time and means that development organizations across the company can use the same tools, have access to the same documentation, and adhere to the same standards.
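
To make that out-of-the-box idea slightly more concrete, here’s a minimal, purely hypothetical sketch of the golden-path scaffolding an IDP typically automates: stamping out a new service from an approved template so every team starts from the same standards. This isn’t Backstage’s actual implementation; the template directory and placeholder names are invented for illustration.

```python
"""Hypothetical golden-path scaffolder: copies an approved service
template and fills in project-specific values. Illustrative only."""
from pathlib import Path
from string import Template
import shutil

# Assumed layout: a template directory containing files with
# $service_name / $owner placeholders (e.g. README.md, ci.yaml).
TEMPLATE_DIR = Path("templates/python-service")

def scaffold_service(name: str, owner: str, dest_root: Path = Path(".")) -> Path:
    """Create a new service directory from the golden-path template."""
    dest = dest_root / name
    shutil.copytree(TEMPLATE_DIR, dest)  # copy the approved skeleton

    # Substitute placeholders in every file of the new project.
    for path in dest.rglob("*"):
        if path.is_file():
            text = path.read_text(encoding="utf-8")
            path.write_text(
                Template(text).safe_substitute(service_name=name, owner=owner),
                encoding="utf-8",
            )
    return dest

if __name__ == "__main__":
    project = scaffold_service("payments-api", owner="team-payments")
    print(f"New service scaffolded at {project}")
```

A real IDP wraps this kind of templating in a catalog, a UI, and role-based guardrails, but the basic “start from the approved skeleton” mechanic is the same.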


Evolving security (a bit)


Last fall, security was pervasive at pretty much every IT industry event I attended, including KubeCon North America in Detroit. It featured in many keynotes. Security vendor booths were omnipresent on the show floor.


It’s hard to quantify the security presence at this KubeCon by comparison. To be clear, security was well-represented both in terms of booths and breakouts. And security is so part and parcel of both platforms and technology discussions generally that I’m not sure if it would even be possible to quantify how much security was present.


However, after making myself a nuisance with several security vendors on the show floor, I’ll offer the following assessment. Security is as hot a topic as ever but the DevSecOps and supply chain security messages are getting out there after a somewhat slow start. So there may be less need to bang the drum quite so loudly. One security vendor also suggested that there may be more of a focus on assessing overall application risk rather than making security quite so much about shifting certain specific security processes earlier in the life cycle. Continuous post-deployment monitoring and remediation of the application as a whole is at least as important. (They also observed that the biggest security focus remains in regulated industries such as financial services.)


An AI revolution?


What of the topic of the moment — Large Language Models (LLMs) and generative AI more broadly? These technologies were even the featured topic of the issue of The Economist that I read on my way back to the US from Europe.


The short answer is that they were an undercurrent but not a theme of the event. I had a number of hallway-track discussions about the state of AI, but the advances, which are hard to ignore or completely dismiss even for the most cynical, have happened so quickly that there simply hasn’t been time for them to plug into something like the cloud-native ecosystem. That will surely change.


It did crop up in some specific contexts. For example, in the “What’s Next in Cloud Native” panel, there was an observation that Day 2 operations (i.e., after deployment) are endlessly complex. AI could be a partial answer, enabling a more rapid response to the detection of anomalies. (This relates to my earlier point about security not being an island relative to other technologies and processes.) AIOps is already an area of rapid research and product development, but there’s the potential for much more. And indeed a necessity, as attackers will certainly make use of these technologies as well.
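
As a toy illustration of the kind of Day 2 signal that AIOps tooling works from, here’s a minimal sketch (my own example, not something from the panel or any particular product) of flagging anomalous points in a latency series with a rolling z-score, the sort of statistical baseline that more sophisticated ML-driven detection builds on.

```python
"""Minimal rolling z-score anomaly detector for a metric stream.
Illustrative sketch only; real AIOps systems use far richer models."""
from statistics import mean, stdev
from typing import List, Tuple

def detect_anomalies(values: List[float], window: int = 10,
                     threshold: float = 3.0) -> List[Tuple[int, float]]:
    """Return (index, value) pairs that deviate more than `threshold`
    standard deviations from the trailing window's mean."""
    anomalies = []
    for i in range(window, len(values)):
        baseline = values[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(values[i] - mu) / sigma > threshold:
            anomalies.append((i, values[i]))
    return anomalies

if __name__ == "__main__":
    # Simulated request latencies (ms) with a spike near the end.
    latencies = [101, 99, 102, 100, 98, 103, 101, 100, 99, 102,
                 100, 101, 99, 250, 101, 100]
    print(detect_anomalies(latencies))  # flags the 250 ms spike
```

Real systems would learn seasonality, correlate signals across services, and feed remediation workflows, but even a simple baseline like this hints at why faster anomaly response is attractive.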