Managing AI at the Edge

A recent report explores the benefits of using artificial intelligence where data is generated and provides a framework for building “AI at the Edge” strategies.

By Joanie Wexler

July 16, 2024

Beyond data centers, artificial intelligence (AI) and edge computing are joining forces to drive a new wave of digital innovation.

Edge computing has been developing for years as a data center extension that moves processing closer to the source of data for faster response times and, often, improved economics. Now, compute-intensive, distributed AI applications, according to experts, promise to drive double-digit growth in edge deployments over the next few years.

IDC, for example, projects that edge computing spending could reach $350 billion by 2027, fueled largely by AI deployments.

“To meet [AI] scalability and performance requirements, organizations will need to adopt the distributed approach to architecture that edge computing provides,” Dave McCarthy, IDC research vice president of cloud and edge services, said in a March 2024 TechRadar Pro article.

In an interview with The Forecast, Steve McDowell, chief analyst and founder at NAND Research, summed up the trend: “The reason we push AI to the edge is because that's where the data is.”

McDowell recently published Taming the AI-enabled Edge with HCI-based Cloud Architectures, a report commissioned by hybrid multicloud software company Nutanix. The report explores the impact of extending IT resources to the edge and the role of AI as a driving force, particularly in areas like image recognition for retail, manufacturing, and other industries.

Strategic Imperative

Deployed at the edge, AI algorithms process data locally in mere milliseconds, delivering the real-time feedback and control that let businesses fully capitalize on their data. AI-driven distributed applications include smart homes, factories, and cities; customer personalization, automated inventory, and self-checkout in retail environments; public-safety security cameras and computer vision; healthcare monitoring; and autonomous vehicles, to name a few.

NAND Research chief analyst Steve McDowell talks with Greg White, senior director, strategic marketing. Source: NAND Research.

Running AI-driven applications at the edge will usually require bolstering compute resources at these sites, according to McDowell. As an example, he pointed to a smart-shelf use case demonstrated at the National Retail Federation 2024 show in January that requires an AI- and compute-fortified edge.

“Intelligent cameras are positioned all up and down the aisles of the grocery store. The only job these cameras have is to monitor the inventory on the shelves they can ‘see,’” he explained. In this scenario, when an item sells out and leaves a gap on the shelf, the system triggers an alert for the shelf to be restocked.

“That's data-intensive processing that you don't want to send to the cloud, necessarily,” he said, because of the response-time latency and network costs that doing so incurs.
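To make the pattern concrete, a minimal sketch of such a loop might look like the following. The camera object, `detect_gaps` model call, and alerting function are hypothetical placeholders for illustration, not details from the report:

```python
# Sketch of the smart-shelf loop: frames are scored on a local edge node,
# and only tiny alert messages ever leave the store. detect_gaps() and
# send_restock_alert() are hypothetical stand-ins for a real vision model
# and a store task system.
import time

def detect_gaps(frame) -> list[str]:
    """Placeholder for a locally hosted vision model (e.g., an ONNX or
    TensorRT call); returns IDs of shelf slots judged empty."""
    return []  # real version: run inference on `frame`

def send_restock_alert(slot_id: str) -> None:
    """Notify staff that a slot needs restocking; a few bytes, not video."""
    print(f"restock needed: {slot_id}")

def monitor(camera) -> None:
    while True:
        frame = camera.read()              # raw frames stay on-premises
        for slot_id in detect_gaps(frame):
            send_restock_alert(slot_id)
        time.sleep(1.0)                    # scan cadence; tune per aisle
```

The point of the design is in the comments: the video itself never crosses the WAN; only the small, derived alerts do.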

“Now, do I need a GPU? Not necessarily,” said McDowell, referring to high-end computer chips able to perform the same operation on multiple data values in parallel. 

“But there is a need for some kind of special processor. There's a whole range of application-specific inference engines [emerging] for these kinds of edge applications around computer vision and natural language processing” from suppliers such as Qualcomm, AMD, Intel, and several startups, he said.
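The article doesn't name a runtime, but one common way to target whatever inference hardware a given edge node happens to have is to package the model in a portable format such as ONNX and let the runtime choose among the execution providers installed locally. A minimal sketch with ONNX Runtime follows; the `model.onnx` path and input shape are placeholders:

```python
# Accelerator-agnostic inference: prefer a GPU provider if this node has
# one, otherwise fall back to CPU. "model.onnx" is a placeholder path.
import numpy as np
import onnxruntime as ort

available = ort.get_available_providers()
preferred = [p for p in ("CUDAExecutionProvider", "CPUExecutionProvider")
             if p in available]

session = ort.InferenceSession("model.onnx", providers=preferred)

input_name = session.get_inputs()[0].name
frame = np.zeros((1, 3, 224, 224), dtype=np.float32)  # dummy image tensor
outputs = session.run(None, {input_name: frame})
```

The same model file can then ship unchanged to nodes with very different silicon, which matters when a fleet spans hundreds of stores or factories.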

Evolutionary Path

Today’s edge architectures range in size from a single smart device or small set of decentralized servers to a microcosm of a full-blown data center. Edge infrastructure often interoperates with centralized cloud resources, shifting processing to the appropriate location based on the type of operation and its latency requirements.
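A toy illustration of that placement decision appears below; the 100 ms cutoff and the handlers are invented for illustration, since real budgets vary by application:

```python
# Toy placement decision: latency-critical work runs on the local edge
# node, while less time-sensitive work is deferred to the cloud.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Task:
    name: str
    latency_budget_ms: float    # how long the caller can afford to wait

EDGE_CUTOFF_MS = 100.0          # assumed threshold, not a standard value

def place(task: Task,
          run_local: Callable[[Task], Any],
          run_cloud: Callable[[Task], Any]) -> Any:
    if task.latency_budget_ms <= EDGE_CUTOFF_MS:
        return run_local(task)  # e.g., vision inference beside the camera
    return run_cloud(task)      # e.g., nightly retraining, fleet reports

place(Task("shelf-scan", 50.0), run_local=print, run_cloud=print)
place(Task("weekly-report", 60_000.0), run_local=print, run_cloud=print)
```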

According to McDowell, the AI edge differs from traditional edge deployments in its greater compute and data-management demands, as mentioned, as well as in its software lifecycle maintenance and security requirements.

“Traditional edge computing involves things like point-of-sale systems in retail,” McDowell explained. “Once we start putting AI in, then suddenly we have processing requirements that can require AI accelerators.”

The need for GPUs at the edge becomes mandatory “when I start doing things like generative AI,” he said. This involves automating the creation of text, images, videos, or other content using large language models (LLMs) trained via machine learning, often in response to prompts.

From a software perspective, “Ten years ago, the edge was largely about embedded systems or compute systems that we treated as embedded, which means [a software configuration that’s] fairly locked down and doesn't get updated very often,” McDowell observed. “AI, on the other hand, creates more of a living workflow” that requires regular attention. 

“If I'm doing image processing for manufacturing or quality assurance, I want to update those models continuously to make sure I've got the latest and greatest,” McDowell explained. “So I need the ability to manage the software's lifecycle.”
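That lifecycle might be sketched as a simple update loop on each node. The registry URL, JSON shape, and swap step below are hypothetical, standing in for whatever model-management system a deployment actually uses:

```python
# Sketch of the "living workflow": each edge node polls a model registry
# and rolls forward when a newer version appears.
import json
import time
import urllib.request

REGISTRY_URL = "https://models.example.com/shelf-vision/latest"  # hypothetical
current_version = "1.0.0"

def check_for_update() -> None:
    global current_version
    with urllib.request.urlopen(REGISTRY_URL) as resp:
        meta = json.load(resp)  # assumed shape: {"version": ..., "url": ...}
    if meta["version"] != current_version:
        # real version: download weights, validate, then swap atomically
        current_version = meta["version"]
        print(f"rolled model forward to {current_version}")

while True:
    check_for_update()
    time.sleep(3600)  # hourly poll; push-based delivery is also common
```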

And when it comes to security, “Now I don't have to just worry about logical security or network security. I also have to worry about physical security,” said McDowell. “So it's a whole different way of thinking from a traditional data center mindset.”

At the same time, McDowell noted in the Taming the AI-enabled Edge report that processing data locally keeps sensitive information from continually being sent back and forth to a central server, reducing exposure to potential data breaches.

By contrast, computer scientist Dr. Douglas Comer pointed out in a 2023 Forecast article that edge computing increases the overall enterprise attack surface. This is particularly true for the physical security McDowell mentioned, given that edge locations are often in remote areas with little physical supervision. To mitigate the risk, Comer recommended encrypting any sensitive upstream data sent to the cloud or another data center for further analytics and processing, so it cannot be intercepted in transit.
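As a minimal illustration of that advice, an edge node could encrypt records with the widely used `cryptography` package before they leave the site. The sample record is invented, and real deployments would also need key management, which is out of scope here:

```python
# Encrypt sensitive records on the edge node before they travel upstream
# for further analytics. Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice: provision keys via a KMS/vault
cipher = Fernet(key)

record = b'{"patient_id": 123, "heart_rate": 88}'   # hypothetical payload
token = cipher.encrypt(record)                      # safe to send over the WAN
assert cipher.decrypt(token) == record              # central site decrypts
```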

Piggybacking on Cloud for Scale

Technologies initially designed to ease cloud portability, interoperability, and management – such as containers and virtualization – are also a boon to AI-driven edge deployments, said McDowell, because they abstract workloads away from the underlying hardware.

“Whether using containers through [Red Hat] OpenShift or integrated, native virtualization, both on Nutanix, managing the configuration of a node or a fleet of nodes without having to physically touch a machine to manage or update it becomes a pushbutton operation,” he explained.

“So I don't need expertise on site, which is a key enabler for edge. If you have to have trained IT specialists wherever you're deploying infrastructure, that doesn't scale. And edge computing is all about scalability,” McDowell said.
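In generic terms, that pushbutton model means declaring one desired state and letting a management plane reconcile every node against it. The sketch below is illustrative only; the names and endpoints are invented, not Nutanix's actual interface:

```python
# Generic sketch of "pushbutton" fleet management: declare one desired
# state and reconcile every node against it through a management API,
# instead of touching machines individually.
desired_state = {
    "image": "store-vision:1.1.0",  # container image each node should run
    "replicas": 1,
}

def apply_to_fleet(nodes: list[str], state: dict) -> None:
    for node in nodes:
        # real version: one call to the management plane's API per node
        print(f"reconciling {node} -> {state['image']} x{state['replicas']}")

apply_to_fleet(["store-017", "store-042", "store-113"], desired_state)
```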

Joanie Wexler is a contributing writer and editor with more than 30 years of experience covering the business implications of IT and computer networking technologies.
