When Workloads Are in the Wrong Place

Most data and application workloads will run in suboptimal locations over the next few years, caution analysts. Here’s how to avoid costly app-to-infrastructure mismatches.

By Joanie Wexler | November 4, 2024

By some estimates, as many as 82% of enterprises have adopted hybrid multicloud, which combines private and public cloud IT resources. Enterprises have deployed mixed-cloud infrastructures for a variety of reasons, including the rapid ascent of artificial intelligence (AI), but the overriding reason may very well be happenstance.

In a podcast, David Linthicum, chief cloud strategy officer at Deloitte Consulting, told eWeek that many organizations are running multiple clouds by accident. 

“This has put them in situations they didn’t necessarily plan for. Multicloud is hard, complex, with lots of moving parts,” Linthicum said.

And that can leave cloud workloads unoptimized. Add in the recent, supersonic rise of artificial intelligence, and the IT puzzle only becomes more complicated.

What Is Cloud Workload Optimization?

Cloud workload optimization is the process of using a variety of cloud resources to best match a company’s IT needs. The goal is maximum efficiency: balancing costs, security concerns, and performance demands to create systems that are faster, more secure, and less expensive to run.
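
To make that balancing act concrete, here is a minimal sketch of how a team might score candidate placements for one workload. The placement options, 1–10 factor scores, and weights are all hypothetical, invented for illustration; real optimization tooling weighs far more variables.

    # Toy placement scorer: weighs cost, performance, and security per option.
    # All placements, 1-10 scores (higher is better), and weights are illustrative.

    PLACEMENTS = {
        "public_cloud":  {"cost": 4, "performance": 8, "security": 6},
        "private_cloud": {"cost": 7, "performance": 7, "security": 9},
        "edge":          {"cost": 6, "performance": 9, "security": 7},
    }

    # Hypothetical weights reflecting one workload's priorities.
    WEIGHTS = {"cost": 0.5, "performance": 0.3, "security": 0.2}

    def score(factors: dict) -> float:
        """Weighted sum of per-factor scores for one placement."""
        return sum(WEIGHTS[name] * value for name, value in factors.items())

    for placement, factors in PLACEMENTS.items():
        print(f"{placement:14s} -> {score(factors):.1f}")
    print("Best fit:", max(PLACEMENTS, key=lambda p: score(PLACEMENTS[p])))

For a cost-sensitive workload like this one, the private cloud wins; shift the weights toward performance and the answer changes, which is the whole point of treating placement as an ongoing optimization.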

Does Multicloud Help Avoid Provider Lock-in?

In companies that have implemented hybrid multicloud by design, a frequently mentioned motivation is a desire to run IT workloads as inexpensively as possible, which many believe requires avoiding lock-in with any one cloud provider.


Yet there’s a catch, according to Linthicum.

“Multicloud gives you choice, but it doesn’t necessarily solve the problem of vendor lock-in,” he said. “Once you leverage the features of a given provider’s service, you’re kind of stuck unless you want to rewrite or refactor those systems to be native on other cloud provider platforms.” That can be costly, time-consuming and risky.

He did acknowledge, however, the emergence of “promising AIOps tools…that can look at multiple clouds and deal with multiple services through abstraction and automation.” Those tools assist in liberating applications from cloud-native trappings, helping avoid lock-in.

Linthicum estimated that enterprises will soon "refocus on cross-cloud rather than optimizing for one cloud. That means [they’ll need] fewer people and a single base of talent” rather than multiple independent cloud-native skillsets.

Top Optimization Variables

Cost is an important variable to consider when optimizing workloads. Cost-optimizing the placement of an application may involve using a public cloud service early on and later moving it to more affordable private infrastructure, a pattern currently unfolding with generative AI.

“For example, when organizations are building a new application, they often prefer to rent out infrastructure from the [public] cloud, instead of building new capacity in their data center, to see how that application is received,” said Harsha Kotekela, director of product and solution marketing at Nutanix.

“Once it’s done and they see how the application is received, they might move it back on-prem depending on their existing capacity or data locality and other factors. Public cloud is good for flexibility, but it could cost more.”
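
A rough break-even calculation illustrates the trade-off Kotekela describes. The dollar figures below are hypothetical, chosen purely for illustration; a real comparison must also account for egress fees, staffing, power, and actual utilization.

    # Toy break-even: months until owning on-prem capacity beats renting
    # public cloud for a steady workload. All dollar figures are hypothetical.

    public_cloud_monthly = 12_000   # pay-as-you-go rental
    onprem_upfront = 150_000        # hardware purchase for equivalent capacity
    onprem_monthly = 3_000          # power, cooling, support

    month = 1
    while onprem_upfront + onprem_monthly * month > public_cloud_monthly * month:
        month += 1
    print(f"On-prem becomes cheaper after ~{month} months")
    # 150,000 + 3,000m < 12,000m  =>  m > 150,000 / 9,000, i.e. ~17 months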

Cost is just one optimization factor, however. Comprehensive cloud optimization requires continually balancing per-workload support costs with application performance and data sovereignty considerations, said Lee Caswell, senior vice president of product and solutions marketing at Nutanix. 


Business continuity is another factor, he said. All of these factors increasingly call for tools that deliver a consistent experience across private and public clouds, with unified visibility, alerting, and integration.

Caswell explained that performance is largely driven by how long it takes to access data. Access can be accelerated using hyperconverged infrastructure (HCI), which colocates compute power and data storage for minimal latency, he said. He added that it also helps to adopt public cloud services with infrastructure in the countries where an enterprise serves employees and customers, reducing distance-induced delay.
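
Distance-induced delay has a hard floor set by physics: light in optical fiber travels at roughly two-thirds the speed of light in a vacuum, about 200 km per millisecond. The sketch below computes that lower bound for a few hypothetical fiber-path lengths; real round-trip times are higher once routing and queuing are added.

    # Best-case round-trip time from fiber distance alone. Light in fiber
    # travels at ~2/3 c, roughly 200 km per millisecond.

    FIBER_KM_PER_MS = 200.0

    def min_rtt_ms(fiber_km: float) -> float:
        """Lower bound on round-trip time over a fiber path of this length."""
        return 2 * fiber_km / FIBER_KM_PER_MS

    # Hypothetical fiber-path lengths between users and a cloud region.
    for route, km in [("same metro", 50), ("same country", 1_000), ("another continent", 9_000)]:
        print(f"{route:18s} {km:>6,} km -> at least {min_rtt_ms(km):.1f} ms RTT")

No amount of infrastructure tuning recovers the 90-plus milliseconds lost to an intercontinental round trip, which is why placing infrastructure near users matters.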

Data sovereignty covers security, governance, and regulatory compliance. “It involves who owns my data, who can subpoena my data, and knowing exactly where my data is,” Caswell explained.

Persistent Workload Placement

How cost, performance, security/compliance, and business continuity are optimized varies by application, industry standards, government regulation, and corporate philosophy. These factors are also fluid: as time marches on, cloud service options, pricing, and regulations change. The dynamic nature of the digital economy, then, favors regular re-evaluation of what’s running where over “set-and-forget” hybrid multicloud management approaches.


In fact, Gartner predicts that by 2027, 85% of workload placements made will no longer be optimal, because many enterprises using multiple cloud platforms will not have figured out how to stay on top of change management.

How can enterprises avoid finding themselves in this situation?

Rather than waiting for user complaints about application performance or for a compliance violation to rear its head, IT teams can get ahead of the game with a hybrid multicloud strategy, according to Kotekela. It allows them to inventory, assess, and move applications based on cloud services, skill sets, service-level agreements (SLAs) and more on an ongoing basis.

Cloud Workloads in the Age of AI

Artificial intelligence (AI) technology has recently experienced a meteoric rise, transforming industries big and small across the world economy. According to Cisco, 83% of executives believe that AI is a priority for their business today, and Statista estimates that the economic value of AI will grow twentyfold to nearly two trillion dollars by 2030. More companies than ever are using cloud computing for their growing catalog of AI workloads, making optimization increasingly imperative to every company's bottom line.

“AI is a new net workload and it’s different in its very nature, so optimization should be viewed in a new light,” said Caswell. 

“Because of the scale at which these large language models are being developed, with up to a trillion parameters, we believe they will be developed in the public cloud where they will have access to GPUs and can spin up and down as the model is completed.” 


It’s now easier than ever for enterprises to enter the AI space, but adeptly handling such an immensity of data remains the primary challenge. While the public cloud is ideal for tapping the large reservoir of compute power required to train models, companies may also move their apps into private data centers when certain information requires additional protection.

“The large language models are coming out of the public cloud, but now you can bring them onto your core data center where you can retrain it across different clouds,” said Caswell. 

“Now it’s in your private data center. Then they can be run out at the edge, which could be like a retail environment.”


Once an AI application is refined, it may be relocated to the edge, where new data can be ingested quickly and close to its source.

“You take that model and you can run it at the edge, because the data is being generated at the edge, where it requires a lot less GPUs,” said Caswell. 

“The idea is that now that the model is using my private data, anytime that I tune the model, I’m protecting my private data.”

In this way, edge computing saves crucial time and resources that would otherwise be wasted processing the same data far from its originating source.
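
A minimal sketch of that pattern: run inference where the data is generated and ship only a compact result upstream. The sensor readings, stand-in "model," site name, and threshold below are all hypothetical placeholders for whatever an edge deployment actually runs.

    # Toy edge pattern: infer locally, send only a compact summary upstream.
    # Readings, "model," and threshold are hypothetical placeholders.

    import json
    import statistics

    ANOMALY_THRESHOLD = 80.0  # hypothetical alert level

    def local_model(window: list) -> float:
        """Stand-in for an edge-deployed model scoring a window of readings."""
        return statistics.fmean(window)

    readings = [71.2, 69.8, 70.5, 88.9, 90.1, 72.3]  # generated at the edge
    score = local_model(readings)

    summary = {
        "site": "store-042",  # hypothetical retail-edge location
        "score": round(score, 1),
        "anomaly": score > ANOMALY_THRESHOLD,
        "samples": len(readings),
    }
    print(json.dumps(summary))  # a few bytes upstream instead of the raw stream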

Steps to Cloud Optimization

Combining a hybrid multicloud blueprint with intelligent tools that simplify cross-cloud data integration, ease application mobility and alert you to cloud pricing changes helps avoid cloud-to-workload misalignment that can create unnecessary expenses, degrade performance and compromise security.

Below are five basic steps, culled from a variety of industry experts and reports, to keep applications and workloads optimized over time.

1. Create a cloud framework. Build a map of your existing private/public cloud environment. Include all the workloads you know about and can discover with cloud visibility tools, where they currently run, their performance requirements, any service-level agreements (SLAs) in place to support them, and alternative potential cloud placement options. Add an inventory of all the different cloud services you use, how you connect to them, if and how they connect to each other, and the departments that use each service. Document all the cloud skillsets available in the organization. This framework creates a foundation for managing cloud assets going forward (a minimal inventory sketch follows step 5).

2. Identify integration needs. With a bird's-eye view of the whole cloud environment, specify how individual clouds currently share data or are likely to do so in the future. What integration tools and technologies do you need to connect applications, systems, data repositories, and IT environments and enable the real-time exchange of data and processes? For example, Nutanix Cloud Clusters (NC2) allows on-premises IT environments to be replicated and run in cloud services, enabling hybrid multicloud integration. It replicates, in public cloud environments, the Nutanix cloud platform enterprises use to build and manage their on-premises private clouds. Natively integrated with public cloud providers, NC2 hides the differences and complexities of these platforms from IT operators using an abstraction layer that makes mixed Nutanix private clouds and public clouds appear as a consistent single environment. In this way, it enables application mobility across clouds without retooling, code changes or new skill sets, helping minimize cost and risk. It also enables consistent cloud management, security policy setting and enforcement, and cost optimization across the mixed hybrid multicloud using complementary applications that work on top of NC2.

3. Establish a platform-agnostic cloud-deployment automation strategy. Identify where most time is spent and what most needs automation. If plans call for using a wide variety of cloud services, it can be beneficial to create standardized rules for deploying different cloud environments that easily translate into automated configuration rules.


4. Evaluate IT resource requirements, especially networking. This includes processing power, data storage capacity, and the network infrastructure that connects all the cloud components. Consider, for example, the cumulative latency that builds up across connections spanning multiple cloud infrastructures and its aggregate effect on application performance (see the latency-budget sketch after step 5). If it threatens your SLAs, consider direct cloud interconnection services between your private and public cloud(s), which bypass the public Internet to decrease latency and improve performance.

5. Deploy cross-cloud tools for optimizing cost, business continuity and staying ahead of security complexity. These are increasingly available from both Nutanix and third parties. NC2, for example, supports cost governance for automatically tracking costs across all private/public cloud instances and discovering more cost-effective options. Security managers such as Nutanix Flow Security Central provide you with a common dashboard for managing security across the whole hybrid multicloud without having to deal with each native-cloud security system separately. And native disaster recovery (DR) capabilities in NC2 support creating custom, per-application protection levels across multiple clouds and options for "elastic" and "hibernating" DR that support dynamic business continuity requirements while helping you manage costs.
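
The sketch below makes steps 1 and 4 concrete: a minimal inventory record for the cloud framework, plus a cumulative-latency check against an SLA. Every field name, workload, latency figure, and SLA target is hypothetical, chosen only to illustrate the bookkeeping; real tooling would pull these values from discovery and monitoring systems.

    # Minimal sketches for step 1 (a workload inventory record) and step 4
    # (a cumulative-latency check against an SLA). All values are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        current_placement: str          # where it runs today (step 1)
        alternatives: list              # candidate placements (step 1)
        sla_latency_ms: float           # end-to-end latency target
        hop_latencies_ms: list          # measured latency per cross-cloud hop

        def within_latency_budget(self) -> bool:
            """Step 4: does cumulative cross-cloud latency fit the SLA?"""
            return sum(self.hop_latencies_ms) <= self.sla_latency_ms

    inventory = [
        Workload("order-api", "public_cloud_us", ["private_dc_east"],
                 100.0, [42.0, 55.0, 20.0]),
        Workload("inference-svc", "edge_store_042", ["private_dc_east"],
                 50.0, [4.0, 9.0]),
    ]

    for wl in inventory:
        total = sum(wl.hop_latencies_ms)
        verdict = "ok" if wl.within_latency_budget() else "over budget: consider a direct interconnect"
        print(f"{wl.name:14s} {total:6.1f} ms of {wl.sla_latency_ms:.0f} ms SLA -> {verdict}")

Here the hypothetical order-api workload blows its 100 ms budget on cross-cloud hops alone, flagging it as a candidate for consolidation or a direct interconnect, which is exactly the kind of ongoing re-evaluation the steps above describe.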

Prepare for Growth

Businesses running hybrid multicloud environments may already have foundational strategies in place for managing them. But as the number and variety of cloud services they use grow, they should be prepared for their environments to become more complex.


Look no further than how AI has transformed the IT landscape practically overnight. As Deloitte’s Linthicum noted in an article for InfoWorld, complexity is a byproduct of heterogeneity, which in turn results from businesses wanting to provision best-of-breed cloud services.

While this is “a natural progression of the expansion of cloud computing,” he wrote, it means most enterprises keep increasing the number of services they use, and more services naturally result in added cost.

Managing cost and complexity requires optimizing all workloads so that they meet business needs consistently and reliably over time at the best possible price point. Enterprises should plan ahead by creating a dynamic, evolving framework that takes advantage of abstraction and automation and the latest cross-cloud tools that transcend cloud-native platforms as they become available.

Editor’s note: Learn more about Nutanix technologies for hybrid multicloud.

This is an updated version of an article originally published on September 28, 2022.

Joanie Wexler is a contributing writer and editor with more than 25 years of experience covering the business implications of IT and computer networking technologies.

Chase Guttman and Ken Kaplan updated this story.

© 2024 Nutanix, Inc. All rights reserved. For additional information and important legal disclaimers, please go here.
