In this discussion, we will review past waves of change, the lessons they taught us, and how those lessons can inform the current cycle of change: the emergence of the cloud and cloud migrations.
Sociologists recognize that there are defined periods in our society when individuals and groups of people transcend traditional bounds and bring about major shifts in policy and structure.
These periods are often hard-fought over sustained stretches of time and are referred to as social change movements. During them, loosely connected members align to challenge the status quo in search of a new and better order.
A parallel for this type of radical change for those of us in technology is often referred to as a “paradigm shift”. Unlike social movements, however, technology innovation can be rapid. Like them, though, these shifts involve a migration from the status quo, or “old way”, to the “new way” of doing things.
We frequently see introductions of new or redesigned technology. Consider for a moment that the smartphone most people carry in their pocket today is almost “a thousand times faster than the mid-1980s Cray-2 Supercomputer, several multiples faster than the computer onboard NASA’s Perseverance Rover currently exploring Mars and, perhaps most significantly, faster than the laptops most of us are carrying around today”.
The cycle of innovation and leapfrogging advancement in the technology space is a modern marvel; however, for the purposes of this conversation I would like to set aside the dizzying number of incremental advancements and instead focus on these broader so-called “paradigm shifts”.
Investopedia defines the term paradigm shift as “a major change in the worldview, concepts, and practices of how something works or is accomplished. A paradigm shift can happen within a wide variety of contexts from scientific research to industry.”
A term we hear in the technology space associated with, or used synonymously with, a paradigm shift is “disruption”. This disruption can be innovation in technology and/or new approaches to the application of technology. Along with new approaches to old problems, these disruptions often bring the promise of decreased cost and increased efficiency. The always elusive Win, Win!
This dual benefit is nirvana for CIOs and their IT departments, who are under constant pressure to both innovate and do more with less. This dual-threat value proposition is particularly attention-grabbing given that IT shops are frequently seen as “cost centers”, an often derisive classification for lines of business that “create cost” rather than “contribute to” the mission of the business. This characterization feels like a major slight given that, without the IT team and the technology solutions they provide, most businesses cannot operate, but I digress.
Finally, a silver bullet!?
Inspired by the prospect of lower cost and the ability to do more with less, it is no wonder these technology disruptions catch fire and become the talk of boardrooms and IT shops, buzzword clickbait, and front-page fodder for every CIO magazine, analyst, and vendor looking to share the good word.
As a result, excitement builds and priorities get reshuffled to capture the potential benefits as companies dive headlong into these initiatives. In many cases, good things do come to those who know how to execute and adopt change. In just as many cases, however, those who leap before they look end up disillusioned with shelfware or costs that far exceed projections.
So, with the benefit of hindsight, let’s briefly consider a few of the technology disruptions we have seen over the past several decades and the lessons learned.
Mainframe to Distributed Computing –
I joined the technology space in the late 1990s, during some very tumultuous times. Headlines in public forums were dominated by buzz about the internet, the companies minting millionaires based solely on “ideas” for harnessing the burgeoning internet, and the pending year 2000 bug. These phenomena were more commonly known as the dot-com bubble and Y2K.
The dot-com bubble was about exploding market caps and going from idea to IPO as quickly as possible with anything that sounded even remotely like it could capitalize on the internet. Y2K was the preparation for the coming year 2000 and a bug that could result from the two-digit year format used by early coders to minimize the use of computer memory, which decades ago was an expensive line item. The fear was that the rollover to “00” would have a host of negative consequences, ranging from bringing down our technology backbone to planes dropping from the sky and people’s bank balances being digitally wiped out.
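To make the bug concrete, here is a minimal, hypothetical sketch in Python (not code from any actual Y2K-era system, which would more likely have been COBOL) showing how two-digit year arithmetic breaks at the rollover, and how the common “windowing” remediation expands the years back to four digits:

```python
# Toy illustration of the Y2K problem: years stored as two digits to save memory.

def years_between(start_yy: int, end_yy: int) -> int:
    """Naive interval calculation using two-digit years."""
    return end_yy - start_yy

# A loan opened in 1998 ("98") and evaluated in 2001 ("01"):
print(years_between(98, 1))   # -97 -- nonsense; the real answer is 3

# A common remediation: "windowing" two-digit years around a pivot.
def expand_year(yy: int, pivot: int = 50) -> int:
    """Treat two-digit years below the pivot as 20xx, the rest as 19xx."""
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand_year(1) - expand_year(98))  # 3 -- correct once years are expanded
```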
Another slightly less calamitous but still buzzworthy conversation stirring in IT circles was what the rise and mainstream adoption of the internet meant for distributed computing and the future of big iron. In its simplest form, the idea was that distributed computing would tie together a “distributed” network of compute so that idle resources could be put to work, delivering more efficiency and power and allowing data to be distributed more efficiently.
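As a rough sketch of that idea (purely illustrative, using Python’s local process pool as a stand-in for a network of machines rather than any real product of the era), splitting one big job into chunks and farming them out to idle “nodes” looks something like this:

```python
# Toy illustration of distributed computing: split one big job into
# independent chunks and hand each chunk to an available worker.
from concurrent.futures import ProcessPoolExecutor  # stand-in for networked nodes

def crunch(chunk):
    """Pretend this is heavy number-crunching done on a remote machine."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]  # carve the work into 4 pieces

    with ProcessPoolExecutor(max_workers=4) as pool:
        partials = list(pool.map(crunch, chunks))

    # Same answer as doing it all on one machine, just computed in parallel.
    print(sum(partials))
```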
This paradigm shift meant a turn away from the then-traditional, or “legacy”, mainframe. In layman’s terms, the value proposition was more power, more nimbleness, more efficiency, and less money?! Win, Win, Win and Win! This is a value proposition even late-career CIOs could get behind. Actually, not really: the primary hype was probably driven more by the always-meddling lines of business and network operations. That said, distributed computing led many to declare the mainframe dead.
So, with the benefit of hindsight, what were the lessons learned? Well, decades later, most of the 457 companies that held initial public offerings in 1999 no longer exist, yet we still have mainframes. In fact, while smaller businesses may drift away from mainframe technology, medium-sized and larger organizations have grown their mainframe footprints by 5 to 15% and 15 to 20%, respectively, according to a Gartner report. What’s more, as of July 2020 IBM reported a roughly 60% increase in mainframe sales over the prior three quarters, owing to the release of the z15 system.
But why? There are several factors, security being key among them; however, similarly impactful were the ever-expanding and unplanned costs of distributed systems: cores, cooling, networking, security/RDBMS/OS software… costs that seemed to multiply and compound as the systems grew.
And what about Y2K? Many remember it as much ado about nothing. That said, there is a broader lesson drawn by those who have studied the event. “The Y2K crisis didn’t happen precisely because people started preparing for it over a decade in advance,” said Paul Saffo, a futurist and adjunct professor at Stanford University. The professor goes on to share a great quote: “better to be an anonymous success than a public failure.”
Virtualization
Another “new” paradigm shift, one ironically core to mainframe computing dating back to the late ’60s and early ’70s (shout-out to LPARs), was virtualization. The concept was to divide system resources and separate the logical machine from the physical machine.
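A bare-bones sketch of the concept (a hypothetical model for illustration only, not how ESX, LPARs, or any real hypervisor is implemented) is essentially bookkeeping: one physical machine’s resources are carved into logical machines, and allocations the hardware cannot back are refused:

```python
# Toy model of virtualization: one physical host, several logical machines.
from dataclasses import dataclass, field

@dataclass
class PhysicalHost:
    total_cpus: int
    total_mem_gb: int
    vms: dict = field(default_factory=dict)  # name -> (cpus, mem_gb)

    def create_vm(self, name: str, cpus: int, mem_gb: int) -> None:
        used_cpus = sum(c for c, _ in self.vms.values())
        used_mem = sum(m for _, m in self.vms.values())
        if used_cpus + cpus > self.total_cpus or used_mem + mem_gb > self.total_mem_gb:
            raise ValueError(f"Not enough physical capacity left for {name}")
        self.vms[name] = (cpus, mem_gb)

host = PhysicalHost(total_cpus=32, total_mem_gb=256)
host.create_vm("web-01", cpus=4, mem_gb=16)
host.create_vm("db-01", cpus=8, mem_gb=64)
print(host.vms)  # logical machines sharing one physical box
```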
This idea exploded onto the scene in the mid-2000s when VMware introduced its ESX Server. The high-level benefits, again, were more efficient use of resources, better end-user reliability and, you guessed it, reduced costs. Win, Win, and Win!
Again, this became front-page news in CIO and technology magazines everywhere and a slam dunk for CIOs. The opportunity to spend money in order to save even more, and thereby deliver cost savings, was a dream come true. The question became not “if” but “when”, and how fast you could adopt the new technology.
The result? Well, a lot of IT shops bought a lot of software and did in fact make meaningful strides toward efficiency, while technology vendors began minting millionaires and even billionaires. It may not come as a surprise, though, that cost savings come at a cost: licensing fees, particularly when unplanned, unchecked, and uncontained, add up quickly. In the end, those costs began to erode and dilute the anticipated gains.
It feels as if maybe we are seeing a pattern…?
Cloud Computing and Cloud Migrations –
Similar to virtualization, the concept of cloud computing had been around for decades (yet another shout-out to the mainframe). The first use of the term “cloud computing” in its current sense is believed to have occurred on August 9, 2006, when then Google CEO Eric Schmidt introduced it at an industry conference. He was quoted as saying, “What’s interesting [now] is that there is an emergent new model. I don’t think people have really understood how big this opportunity really is. It starts with the premise that the data services and architecture should be on servers. We call it cloud computing—they should be in a ‘cloud’ somewhere.”
To say cloud computing was a big opportunity now seems like a massive understatement. Reports state that “worldwide end-user spending on public cloud services is forecast to grow 18.4% to total $304.9 billion in 2021”. And while there are many types and interpretations of “cloud”, the predominant paradigm shift was toward private companies managing and maintaining data and infrastructure on behalf of others. Of course, the concept of turning over the management of an organization’s data to be run in the public domain was initially met with pushback, ranging from the insult that anyone could run IT better or more securely to abject horror.
That said, today, every enterprise and industry is moving to the cloud. The cloud has become mainstream and the COVID pandemic only further accelerated cloud migrations. In fact, more than 90 percent of today’s enterprises have adopted cloud in some form. The cloud is no longer a place for early adopters.
Yet again, the value proposition of greater flexibility, agility, efficiency, and cost savings has flipped even the most ardent non-believers. Win, Win, Win, and Win? Quadfecta!?
Yet again, many organizations have benefited. However, as in the past, many enterprise efforts to adopt and scale the cloud have slowed or stalled. Some organizations got stuck in an experimental mindset without a plan for where their cloud journey was headed. Others struggled to make a clear business case for scaling up their use of the cloud. “And nearly two-thirds have said they haven’t achieved the results expected of their cloud initiatives to date.” Regardless of “success”, in nearly every case clients’ cloud costs have exploded with no clear ability to budget them.
What can be learned?
There is a host of considerations that can be gleaned, and conventional wisdom applied, from this short story. One is the simple fact that history tends to repeat itself; another is the adage to look before you leap.
As is often the case, deep wisdom can be drawn from our forefathers, and it was Benjamin Franklin who said, “If you fail to plan, you are planning to fail!” This simple quote stands the test of time and is virtually irrefutable, yet it often goes unheeded. As I reflect on this insight, I would not be at all surprised if one day we discover that old Ben was working on schematics for a mainframe.
Summary
If you or your organization is evaluating a cloud migration and/or looking for cloud cost optimization, CloudGenera is purpose-built for this challenge. The platform is a vendor- and venue-neutral, AI-powered decision engine that enables clients to plan, model, compare, and automate workload placement for their IT assets, and it provides an actionable, prioritized, and spend-optimized “plan” for action.
If you prefer a “plan”, and something that helps you measure and meet expectations, we would love to speak with you.