Now that the IT industry has a decade of cloud computing experience under its belt, it’s clear that a one-size-fits-all cloud approach can’t guarantee the business outcomes enterprises want.
“Cloud-first” strategies involved businesses and government agencies moving workloads and applications wholesale into the public cloud from private data centers as quickly as they could. Their well-intentioned goals were to avoid capital expenditures, achieve the business agility afforded by on-demand IT resource availability, and gain automatic access to the latest tech advances.
With time, however, economic studies and enterprise budget reviews have revealed that while the public cloud operating model continues to solve those IT challenges for certain workloads and applications, it’s not a slam-dunk for all of them.
“If you go in with the one-size mentality, you’ll create more challenges than you had to start with,” said Bryan O’Neal, Director of Cloud Product Development at TierPoint, a managed service provider (MSP) in St. Louis that assists enterprises with cloud planning and optimization.
He said doing so was fraught with repercussions.
“Often there’s work to be done to move applications to the cloud that can add unexpected cost to the migration,” O’Neal said.
These efforts can result in extra expenses for cloud infrastructure and migration services that aren’t accounted for at the start of the project, he explained.
“We’ve had companies endeavor to lift and shift an application to the cloud and find that they’ve overlooked the need for expertise to refactor the application before moving it,” said O’Neal.
Sometimes app modifications are required for cloud interoperability.
“In many cases, the cloud can’t provide the standards for performance and end-user experience the company has set if an application isn’t refactored,” O’Neal observed.
“Businesses may find that the public cloud creates more slowdowns, bottlenecks and noise than they’re accustomed to in their existing model. They then have to back up and expand their scope.”
The Cloud-Smart Approach
Current thinking is that the public cloud remains the right choice for some workloads, particularly early in their lifecycles. But different applications have different requirements for accessibility, performance, security, regulatory compliance and business continuity, said O’Neal.
Even the cost implications of the cloud are shifting. Public cloud services excel at lowering upfront entry costs and leveling the playing field among companies of different sizes and budgets. Over time, though, cloud subscription expenditures tend to surpass on-premises support costs by two to three times, conservatively, as application behaviors grow more predictable.
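To make that cost crossover concrete, here is a rough back-of-the-envelope sketch in Python comparing cumulative cloud subscription spend against an on-premises deployment with a large upfront purchase and lower ongoing support costs. The dollar figures and the breakeven_month helper are hypothetical illustrations, not TierPoint or Nutanix data; they only show the general shape of the comparison.

```python
# Hypothetical numbers for illustration only (not TierPoint or Nutanix figures).
CLOUD_MONTHLY = 12_000     # steady public cloud subscription per month
ONPREM_UPFRONT = 250_000   # hardware, licensing, and buildout
ONPREM_MONTHLY = 4_000     # ongoing power, support, and maintenance

def cumulative_cost(upfront: float, monthly: float, months: int) -> float:
    """Total spend after a given number of months."""
    return upfront + monthly * months

def breakeven_month(max_months: int = 120) -> int | None:
    """First month in which cumulative cloud spend exceeds on-prem spend, if any."""
    for m in range(1, max_months + 1):
        if cumulative_cost(0, CLOUD_MONTHLY, m) > cumulative_cost(ONPREM_UPFRONT, ONPREM_MONTHLY, m):
            return m
    return None

print(breakeven_month())  # month 32 with these illustrative numbers
```

With these made-up figures, the public cloud is far cheaper in year one but costs more in total by the end of year three, the pattern the studies above describe for workloads whose behavior has become predictable.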
With the more complete cloud picture coming into focus, it’s becoming a best practice to be “cloud-smart,” a derivative of the “cloud-first” model. The cloud-smart approach accounts for each application’s many variables and entails continual evaluation and optimization, said Sachin Chheda, who helps lead corporate strategy and strategic partnerships for Nutanix.
In other words, enterprises are being encouraged to adopt hybrid IT environments that match applications to the infrastructure best suited for them and, instead of “setting and forgetting” their decisions, regularly adjust what’s running where.
“Many jump to public cloud and overlook the in-between possibilities that are available to them,” agreed O’Neal.
Chheda said there are two ways an IT organization can design a cloud-smart infrastructure: start with the application or the infrastructure.
When starting with apps, he said to inventory the various attributes required by each application and then map that application to the specific infrastructure that best meets its requirements.
“This approach requires close collaboration between the infrastructure and application teams and significant project management investment at scale for an optimal outcome,” he said.
When starting with the infrastructure, create a list of infrastructure options and their respective attributes and costs. Then share the list with the IT teams responsible for supporting each application or service, who will make the match.
“This method may result in application teams independently seeking out alternatives if none of the options meet their needs and if collaboration with infrastructure specialists is lacking,” said Chheda.
He’s quick to point out that “there’s no right or wrong approach.” An organization should decide how to proceed based on its size, team dynamics, and IT applications and services, he advised.
“As with any IT initiative, disciplined program management, good communication between the various teams and a thorough understanding of the applications and the infrastructure options are necessary,” he said.
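As a minimal sketch of the application-first mapping Chheda describes, the snippet below matches each application’s required attributes against the attributes of a few infrastructure options and picks the option that covers the most requirements. The attribute names, options, and scoring rule are invented for illustration; they are not a Nutanix or TierPoint methodology.

```python
# Illustrative attribute matching; the options and attributes are assumptions.
INFRASTRUCTURE_OPTIONS = {
    "public cloud":  {"elastic", "low_upfront_cost", "managed_services"},
    "private cloud": {"elastic", "data_residency", "predictable_cost"},
    "colocation":    {"data_residency", "predictable_cost", "legacy_hardware"},
    "three-tier DC": {"legacy_hardware", "data_residency"},
}

def best_match(app_name: str, requirements: set[str]) -> str:
    """Return the infrastructure option that covers the most of an app's requirements."""
    scored = {opt: len(requirements & attrs) for opt, attrs in INFRASTRUCTURE_OPTIONS.items()}
    best = max(scored, key=scored.get)
    print(f"{app_name}: {best} (covers {scored[best]} of {len(requirements)} requirements)")
    return best

best_match("ERP backend", {"data_residency", "predictable_cost", "legacy_hardware"})
best_match("web front end", {"elastic", "low_upfront_cost"})
```

The infrastructure-first approach simply inverts the exercise: publish the options table first and let each application team score its own workloads against it.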
Understanding What Makes Apps Tick
These exercises can also be outsourced to an MSP like TierPoint. As part of its assessment, for example, the company establishes a taxonomy for an enterprise customer’s workloads, which involves understanding the behaviors and natures of the various apps. Transactional and big-data analytics apps that can be virtualized, for instance, differ from legacy apps written for older infrastructure models, which “interact with a certain OS and behave in a certain way that can create cloud barriers,” said O’Neal.
He added that common business applications, such as the Microsoft Office suite, have moved into cloud delivery models, making them “no-brainers to shift users to software-as-a-service. However, other critical services or workloads may be more complex.”
For example, O’Neal said, backend ERP and compute-intensive database workloads have high-performance requirements that could necessitate close geographic proximity for delivering end-user access without delays. He likens the setup to the concept of hot vs. cold storage, where cold storage handles queries when the frequency of transactions is low.
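O’Neal’s hot-versus-cold analogy can be pictured as a simple tiering rule: records that are queried often stay on a fast, nearby “hot” tier, while rarely touched records move to a cheaper “cold” tier. The threshold and lookback window below are hypothetical values chosen only to illustrate the idea.

```python
from datetime import datetime, timedelta

# Hypothetical tiering rule: frequently accessed records stay "hot", the rest go "cold".
HOT_ACCESS_THRESHOLD = 10      # accesses within the lookback window needed to stay hot
LOOKBACK = timedelta(days=30)

def choose_tier(access_times: list[datetime], now: datetime) -> str:
    """Place a record on the hot or cold tier based on its recent access frequency."""
    recent = [t for t in access_times if now - t <= LOOKBACK]
    return "hot" if len(recent) >= HOT_ACCESS_THRESHOLD else "cold"

now = datetime.now()
busy_record = [now - timedelta(days=i) for i in range(15)]  # touched daily
stale_record = [now - timedelta(days=200)]                  # last touched months ago
print(choose_tier(busy_record, now))   # hot
print(choose_tier(stale_record, now))  # cold
```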
Growing Cloud Options, On-Prem and Off
Enterprises no longer face a binary choice between a legacy three-tier data center and a public cloud service. The available infrastructure configurations have grown much more plentiful – and complex, noted O'Neal. Enterprises are now challenged to accurately balance the use of public cloud offerings with a growing set of private infrastructure options:
- Private clouds deliver the resource elasticity and scalability benefits of public clouds in a single organization’s private data center.
- Traditional three-tiered data centers. While this option is waning in terms of new buildouts, enterprises often need to continue to support these environments for legacy business-critical applications not designed for the cloud.
- Private colocation services strike a balance between building a dedicated data center and offloading IT to a public cloud. Businesses can rent the fundamentals – real estate, server racks, power/cooling, and possibly Internet connectivity – but provide and manage their own equipment and software or outsource that to an MSP.
- Private data centers built and managed by public cloud providers. Moving a public cloud provider’s infrastructure into a private enterprise data center reduces Internet-induced latency for better performance while letting the business enjoy the resource flexibility, tech refreshes, and cost benefits that the public cloud provides. This option also helps companies in highly regulated industries meet regulatory compliance requirements by allowing the local public infrastructure to be fully integrated with an enterprise’s security and governance mechanisms.
- Mobile pods are a recent development that addresses work-from-home requirements, providing geographic diversity and proximity for better performance.
“We see a lot of our customers looking beyond what I’d call the ‘provider edge,’” O’Neal said. “You can deploy private cloud in [distributed] properties that are small pods of self-contained, virtualized infrastructure.”
A pod could be a mobile rack or a small cabinet that contains compute, storage, and network resources, he explained. “The pod can be plugged into an edge location with minimal space and power requirements and connect up to a private or public cloud that IT can centrally manage.”
Organizing and Classifying Current Apps
A combination of any of the growing private, public, and hybrid cloud options might make sense for a given organization, said Chheda.
He advised inventorying existing apps and IT services with an eye toward categorizing them all as to whether they should be maintained, modernized, or retired. The categories might differ slightly from organization to organization, but basically he outlined the following options:
- Remain: continue running a legacy application as-is
- Retire: turn off an application or replace it with a new cloud-native application
- Rehost or replatform: move an application to the cloud as-is; also called “lift and shift”
- Refactor: modify an application to better support the cloud environment
- Rewrite or rebuild: modernize the application by rewriting it from scratch
Look at attributes of workloads as you do this, Chheda advised. Traditionally, cost has been the overriding factor. “But functionality, performance, resource requirements like CPU, memory, storage, and GPU, growth, and the change rate of the workload are also factors,” he said. “So are the upfront cost vs. ongoing cost, migration cost, and the costs associated with the application ‘R’ treatments listed above.”
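To make that evaluation concrete, here is a small sketch that scores a workload’s attributes against the five “R” treatments listed above. The attribute names, decision rules, and example workload are invented for illustration; a real assessment would weigh many more factors, including the migration and treatment costs Chheda mentions.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    """A handful of illustrative attributes; a real inventory would track many more."""
    name: str
    still_needed: bool        # does the business still depend on it?
    cloud_compatible: bool    # can it run on cloud infrastructure without changes?
    change_rate: str          # "low", "medium", or "high"
    refactor_budget_ok: bool  # is the cost of refactoring or rebuilding acceptable?

def recommend(w: Workload) -> str:
    """Suggest one of the five 'R' treatments using simple, invented rules."""
    if not w.still_needed:
        return "Retire"
    if w.cloud_compatible and w.change_rate == "low":
        return "Rehost or replatform"
    if w.cloud_compatible:
        return "Refactor"
    if w.refactor_budget_ok:
        return "Rewrite or rebuild"
    return "Remain"

legacy_erp = Workload("ERP backend", still_needed=True, cloud_compatible=False,
                      change_rate="low", refactor_budget_ok=False)
print(f"{legacy_erp.name}: {recommend(legacy_erp)}")  # Remain
```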
Joanie Wexler is a contributing writer and editor with more than 25 years of experience covering the business implications of IT and computer networking technologies.
© 2021 Nutanix, Inc. All rights reserved.