Making IT Infrastructure AI Ready

As enterprises move from learning to implementation, IT leaders are looking under the hood to see if their systems can manage AI applications and data.

By Ken Kaplan

November 21, 2024

Many enterprises are well down the path, while others are still unsure about how they’ll use artificial intelligence to run their business. Either way, CIOs and IT teams have a lot of choices to make as AI continues to evolve at lightning speed. They feel that the train is leaving the station and they have to get on board, according to Sean Donahue, senior solutions manager at Nutanix.

“AI is not an option,” he told The Forecast. “It’s not a speculative market. Companies know they have to go AI – they just haven’t figured out what use cases they will tackle first.”

Donahue said that’s likely because many don’t fully grasp how they could benefit from using AI. He likened it to when Thomas Edison introduced the light bulb in 1879.

“It was amazing to see at first sight and the demo struck awe, but people didn’t understand how to use electricity, especially since there was no infrastructure bringing electricity to their home.”

That’s how Donahue aptly summed up the challenge that CIOs and IT leaders are grappling with as they build future strategies for modernizing their infrastructures to align with the now-undeniable AI imperative.

Some say AI is rolling out faster than any other transformative IT technology, including the cloud, in recent history. BCC Research estimates the enterprise AI market will grow at a staggering 43.9% CAGR through 2028, propelled by drivers like wider AI adoption across verticals, heavier investment in emerging AI technologies, and the need to analyze large and complex datasets.

For many IT teams, a significant gap exists between recognizing the need to incorporate AI into their strategies – often driven by strong demand from the C-suite – and possessing the practical knowledge, skills, and resources to implement it effectively.

A major hurdle in this area is infrastructure. Even highly scalable cloud environments often struggle to meet the intense demands of AI workloads, and the best-known cloud hyperscalers are hard-pressed to keep up in traditional cloud environments.

“Cloud providers, including Amazon Web Services, Microsoft Azure and Google Cloud are under pressure to change that calculus to meet the computing demands of a major AI boom,” a recent Wall Street Journal article reported.

“There’s a pretty big imbalance between demand and supply at the moment,” one AWS expert told the WSJ.


Another Dell expert added: “The existing economic models of primarily the public cloud environment weren’t really optimized for the kind of demand and activity level that we’re going to see as people move into these AI systems.”

To get their IT operations AI-ready, forward-thinking leaders are re-evaluating their whole IT ecosystems to build the right infrastructure for handling existing and future AI-powered functions.

Limitations of Traditional IT Infrastructures

Traditional IT infrastructures are often not equipped to handle high-intensity AI requirements, such as training large language models (LLMs) or processing high-volume, real-time data streams. Donahue used a practical car metaphor to illustrate the challenge.

“My 1949 car is not up to today's demands for performance,” he explained. “So am I happy driving it? Absolutely. But I know it's never going to compete on the highway. In fact, I shouldn't be running it on the highway because it's so outdated already.”

In other words, he said, current infrastructure can maintain the status quo. It performs just fine relative to its original design and capacity. But it won’t keep up competitively, especially as more benefits of AI come to light and demand for those benefits must be met with new IT resources.


According to Donahue, AI-powered versions will be bigger and faster on the highway of enterprise IT infrastructure. They’ll drive more smoothly, and customers will want them for their newer features and capabilities. He said the cost of maintaining an old car – or of buying ever more storage and GPUs to keep pace with rising AI demands – outweighs the benefits.

Just as newer car designs have evolved to meet higher standards for safety, fuel efficiency, and performance, enterprise IT infrastructures must evolve to provide the greater computational power, flexibility, and efficiency that AI applications require.

Beyond scalability, efficiency, and capacity, public cloud environments pose inherent security risks with AI due to shared resources. Enhanced security measures and governance frameworks will be critical as enterprises seek to protect intellectual property and customer data within AI models.

“[Companies] can't just go and look up generic large language models built based on all the data in the world,” Donahue explained. “They have to tailor it to their data, industry, and intellectual property. And once you start inferencing your data into a Microsoft copilot or a public LLM, you could become part of the training model.”

After that happens, it can be difficult to track how language models store, access, and use that information.

Addressing these issues means seeking more adaptable and secure infrastructure solutions to leverage AI strategically and securely, ensuring it supports current AI applications, keeps data and IP secure, and is agile enough to stay poised for future innovations and challenges.

AI for Infrastructure

IT leaders are at a crossroads as AI becomes a bigger force behind IT infrastructure and enterprise operations. In many cases, they’re faced with modernizing their infrastructures to support advanced AI applications without understanding their full implications.

Microsoft’s 2024 State of AI Report found that 75% of business leaders say AI is critical to their organization’s success, yet only 15% have fully implemented it. Further, 99% said they’ve had challenges scaling and operationalizing AI.


According to Donahue, IT teams are exploring three key elements: choosing language models, leveraging AI from cloud services, and building a hybrid multicloud operating model to get the best of on-premises and public cloud services.

First, they think about creating their own AI language models, which many believe would solve the security challenges of adopting a model on the public cloud. That may be true in theory, but according to Donahue, it’s more complicated.

“I think there were some misperceptions (myself included) back in late 2023 that we would just create our large language models and have it on our infrastructure,” Donahue said. 

“So building a large language model was misperceived as step number one. And I think we're finding that very, very, very few people will build their own language model.”

Using another car metaphor, Donahue said that building a language model in-house is like “building a car in the garage out of spare parts.”

Next, he said, companies look to cloud-based language models. This, however, is where the aforementioned security and governance concerns emerge most, along with cost control.

“I’m going to quickly learn that data security, privacy and governance issues arise – questions such as ‘Where is the data?’ ‘Who has access to the data?’ and ‘Is it going to leak into the public large language model?’

“If those things don't scare me away from using it with my corporate IP and data, then I'm going to realize at the end of the month that I'm paying the hyperscalers because my AI inferencing application – that little query box that my employees use to ask questions – uses cloud GPUs, and those aren't cheap.”


Moreover, this cost often runs much higher than forecast, as employees use AI tools without regard to usage costs and grow increasingly dependent on them over time.
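To make that cost dynamic concrete, here is a rough back-of-envelope sketch in Python. Every figure in it – headcount, query volume, token counts, per-token GPU-backed pricing – is an illustrative assumption, not a published hyperscaler rate:

```python
# Illustrative monthly cost estimate for a cloud-hosted AI query box.
# All numbers below are assumed placeholders, not real price lists.

EMPLOYEES = 500             # assumed headcount using the tool
QUERIES_PER_DAY = 12        # assumed queries per employee per workday
TOKENS_PER_QUERY = 1_500    # assumed prompt + response tokens
PRICE_PER_1K_TOKENS = 0.01  # assumed GPU-backed price in USD
WORKDAYS_PER_MONTH = 22

def monthly_cost(employees, queries_per_day, tokens_per_query,
                 price_per_1k_tokens, workdays=WORKDAYS_PER_MONTH):
    """Return the estimated monthly inference bill in USD."""
    total_tokens = employees * queries_per_day * tokens_per_query * workdays
    return total_tokens / 1_000 * price_per_1k_tokens

cost = monthly_cost(EMPLOYEES, QUERIES_PER_DAY, TOKENS_PER_QUERY,
                    PRICE_PER_1K_TOKENS)
print(f"Estimated monthly inference cost: ${cost:,.2f}")
```

Because the bill scales linearly with usage, doubling queries per employee doubles the cost – which is exactly why growing dependence on the tool makes forecasts drift upward.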

This brings IT teams to a third step: thinking beyond cloud-based models and considering solutions designed intentionally and specifically to handle AI functionality. 

Donahue pointed to Nutanix’s GPT-in-a-Box, a comprehensive, pre-configured solution that combines hardware and software to support the deployment of AI models directly on-premises, in the cloud, or at the edge. The setup is designed to streamline the deployment and operation of GPT models by providing all the necessary components in a single, integrated package, letting teams bring generative AI and AI/ML applications into their IT infrastructure while keeping data and applications under IT control.
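As a rough illustration of keeping inference under IT control, the sketch below builds a chat-completion request for a self-hosted model behind an OpenAI-compatible endpoint. The endpoint URL, model name, and payload shape are assumptions for illustration only; the actual GPT-in-a-Box interface is described in Nutanix’s documentation:

```python
import json

# Hypothetical on-prem endpoint – requests to it never leave the data center.
LOCAL_ENDPOINT = "http://llm.internal.example.com/v1/chat/completions"

def build_request(question: str, model: str = "llama-3-8b-instruct") -> dict:
    """Assemble a chat-completion payload for a self-hosted model.

    Because the model runs on infrastructure the IT team controls,
    prompts containing corporate IP are not sent to a public LLM and
    cannot become part of someone else's training data.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Answer using internal company data only."},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,  # low temperature for factual internal answers
    }

payload = build_request("Summarize last quarter's support tickets.")
print(json.dumps(payload, indent=2))
```

Sending this payload with any HTTP client (e.g. `requests.post(LOCAL_ENDPOINT, json=payload)`) would complete the round trip; the point is that the traffic stays inside the private environment.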

The Benefits of Integrated AI Solutions

As organizations seek to harness the transformative power of artificial intelligence – and get their internal knowledge and skills up-to-speed – the complexity of deploying and managing AI systems often poses significant challenges.


Donahue explained that GPT-in-a-Box lets existing IT teams streamline the processes needed to onboard AI capabilities. It reduces the complexity of selecting compatible components, configuring software and optimizing performance.

He explained that by controlling the entire stack, including hardware, software, and AI layers, IT teams can implement robust security measures designed to safeguard AI environments, including data encryption, secure data access controls and intrusion detection systems. It also allows teams to manage performance by leveraging the optimal resources for efficiently accessing data in the right location.

Hybrid Multicloud as a Facilitator

According to Donahue, infrastructure must be at the center of the AI adoption strategy and there’s one cloud model poised to be particularly successful: hybrid multicloud. 

“Hybrid multicloud is where it's at,” Donahue explained. 

“AI just speaks to hybrid multicloud because your data sets will be everywhere. You will have to use a solution like unified storage to gather and manage them under one roof. They could be private cloud, public cloud, edge, et cetera.”

He explained that hybrid multicloud environments are particularly effective for AI applications due to their ability to integrate diverse computing resources and data storage solutions. They facilitate efficient data management and processing, pivotal for the performance of AI systems, particularly when handling extensive and varied data sets spread across multiple locations.

“So people who are using hybrid multicloud already will probably have an easier time getting started with their AI efforts.”

Prioritizing infrastructure modernization is essential. Embracing AI effectively demands that enterprises reassess and revitalize their underlying IT systems, focusing on the future and achieving the key scalability, capacity, efficiency, and analytical capabilities required to keep up in a fast-changing IT world.

Editor’s note: Learn more about the Nutanix Enterprise AI capabilities in this blog post and video coverage of its release in November 2024.

Ken Kaplan is Editor in Chief for The Forecast by Nutanix. Find him on X @kenekaplan.

Michael Brenner contributed to this story.

© 2024 Nutanix, Inc. All rights reserved. For additional legal information, please go here.
