
Enterprise AI Reality Check: Implementing Practical Solutions

In this interview, Induprakas Keri, senior vice president and general manager for hybrid multicloud at Nutanix, explains how IT teams can optimize infrastructure, control costs and deliver measurable business outcomes as enterprise AI kicks into gear.

March 7, 2025

A year ago, business leaders who were deploying artificial intelligence tools across their organizations had “stars in their eyes.” That’s according to Induprakas Keri, senior vice president and general manager for hybrid multicloud at Nutanix.

“People thought they could use generative AI to replace humans and automatically cut their costs by X percent,” Keri told The Forecast. 

“Today, I think there’s much more sobriety. The amount of experimentation has surpassed anything that I’d imagined, but in terms of business outcomes, it’s still early.”

Keri sees businesses already benefiting from enterprise AI applications in areas such as cybersecurity and fraud prevention. But in the year ahead, he said, organizations will begin to move past the experimentation phase, taking a clear-eyed approach that balances the vast potential of AI with grounded concerns about infrastructure costs, data privacy and accuracy.

“Businesses need to shorten their time to outcome and see which use cases deliver a return on their investments,” he said. “Then, they can scale up.”

AI Demands Hybrid Infrastructure

Keri noted that there are three distinct phases to building out an AI solution: creating a foundational model; training and refining that model; and using the model for inferencing. 

“Those three phases happen to run on three very different infrastructures,” Keri said. 

“The only place on the planet where you have enough computing power to create a foundational model is a public cloud. But many organizations are going to want to train those models on proprietary data in their on-premises private clouds. And you want to inference close to the edge, because of latency. You can’t afford to be religious about where your workloads are running. You need to be hybrid.”

Organizations Must Optimize Energy Use

Keri compared creating an AI model to inventing a new language, and using that model to the ongoing, thousands-of-times-per-day way we all use words.

“If you can save 30 percent on your energy, you’re going to see that savings not just once, but millions of times,” he said. 

“An AI search can use a hundred times more energy than traditional search, and we’re already spending a huge amount of energy on search. Imagine multiplying that by a hundred.” 

To limit energy waste, Keri said, organizations must invest in infrastructure solutions that allow them to bring AI applications to where their data resides – rather than constantly moving the data itself.

Data Privacy Is Deceptively Complex

In addition to efficiency concerns, Keri said, many organizations opt to train their AI models on-premises to avoid sending sensitive data to the public cloud. 

“If you train a public foundational model with your proprietary data in the public cloud, you’ve basically given up ownership and control of the data,” he said. “It’s now baked into that model, and you can’t do much about it.” 

Even when information remains on-premises, Keri explained, organizations must be careful not to cross-contaminate their data by combining it all in one central AI model. For instance, inferences utilizing customer data should be limited to users who are authorized to access that data (something that is difficult to enforce if multiple teams are all using the same AI solutions).
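The access-control principle Keri describes can be sketched in code. The following is a minimal illustration, not anything from Nutanix: all names (`AUTHORIZED`, `run_inference`, the customers and users) are hypothetical, and a real system would back the check with an identity provider rather than a dictionary.

```python
# Hypothetical sketch: gate customer data *before* it reaches a shared
# AI model, so one team's queries can never be enriched with another
# team's customer data. All names here are illustrative.

AUTHORIZED = {
    "alice": {"acme-corp"},   # alice may query acme-corp's customer data
    "bob": {"globex"},        # bob may query globex's customer data
}

def retrieve_context(customer: str) -> str:
    # Stand-in for fetching that customer's records to enrich a prompt.
    return f"records for {customer}"

def run_inference(user: str, customer: str) -> str:
    # Enforce authorization at the data boundary, not inside the model.
    if customer not in AUTHORIZED.get(user, set()):
        raise PermissionError(f"{user} may not query {customer} data")
    context = retrieve_context(customer)
    return f"answer based on: {context}"
```

The key design choice is that the permission check happens before any data is assembled into a prompt; once data reaches a shared model, per-user restrictions are effectively unenforceable.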

Human Oversight Is Still Needed

Persistent “hallucinations” (instances where AI tools invent wrong answers when they don’t know the right ones) mean that organizations cannot yet trust AI solutions to fully take over most tasks, Keri said. Still, he stressed that businesses can get value from these tools under human supervision. 

“One analogy that I use is LASIK,” he said. “You would never trust a human to wield a laser to drill into your eyes. At the same time, you would not allow a machine to drill into your eyes without a human being present.”

Micro Models May Lead the Way Forward

The latest AI hype wave has largely been carried by monolithic large language models (LLMs) like ChatGPT and Claude, which are trained using countless books, news articles, scientific papers, public webpages, and Wikipedia entries. This breadth means the tools can do everything from drafting a go-to-market strategy for a new software product to writing a humorous limerick about your dog. But it also means that they are essentially jacks of all trades, and masters of none. 

Keri noted that organizations may begin to adopt GPT solutions trained on much smaller datasets, which could limit problems with inaccuracy and data privacy. In fact, Nutanix is already using internal GPT solutions for sales and customer support.


Using application programming interfaces (APIs), organizations can then get these specialized tools to work in tandem to solve more complex problems in much the same way that human specialists work together on larger projects. 
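The composition pattern described above can be sketched as a small pipeline. This is a hypothetical illustration, not Nutanix's implementation: the endpoints, payloads and `call_model` stub are invented, and in practice each call would be an HTTP request to a model-serving API.

```python
# Hypothetical sketch: two specialized "micro" models composed via
# API-style calls, like specialists collaborating on a larger project.
# Endpoints and responses are invented stand-ins for real services.

def call_model(endpoint: str, payload: dict) -> dict:
    # Stand-in for an HTTP POST to a model-serving API.
    fake_responses = {
        "/summarize": {"summary": payload["text"][:40]},
        "/classify": {
            "label": "billing" if "invoice" in payload["text"] else "other"
        },
    }
    return fake_responses[endpoint]

def triage_ticket(ticket_text: str) -> dict:
    # Each narrow model does one task; the pipeline composes the results.
    summary = call_model("/summarize", {"text": ticket_text})["summary"]
    label = call_model("/classify", {"text": ticket_text})["label"]
    return {"summary": summary, "label": label}
```

Because each step is just an API call, individual models can be retrained, swapped or scaled independently without changing the pipeline, which is the "API-driven infrastructure" advantage Keri points to.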

“If you have an underlying infrastructure that is API-driven, then your AI journey will be much more automated and much more painless,” Keri said. “That’s an area where our infrastructure shines, and where our platform is going to really help our customers.”

Looking ahead, Keri stressed the importance of acknowledging the limitations of AI tools, which, in turn, can reveal the best ways for humans to use them. 

“AI is exceptional for predicting average behavior, and for detecting a prevalent pattern of behaviors,” he said. “But the math is often different from how humans reason. Unless you have a model of the real world, you’re always going to fall short. For now, at least, that real-world modeling still requires people.”

Calvin Hennick is a contributing writer. His work appears in BizTech, Engineering Inc., The Boston Globe Magazine and elsewhere. He is also the author of Once More to the Rodeo: A Memoir. Follow him @CalvinHennick.

© 2025 Nutanix, Inc. All rights reserved. For additional information and important legal disclaimers, please go here.
