Empowering Government with AI: Policies, Challenges and Solutions

Local, state and federal governments are harnessing the power of AI for the public good.

By Adam Stone

July 11, 2024

Whether it’s educating children, keeping drinking water safe, putting out fires, fighting crime, building roads and bridges, or defending the nation from foreign adversaries, government agencies have an important role to play in keeping citizens safe, healthy, and happy. Managing all of this requires focus, care and, increasingly, modern data technologies.

Government agencies face huge challenges, including, in many cases, a reliance on legacy IT infrastructure. Many agencies are using decades-old technology to meet the needs of a growing, tech-savvy population. As many aspects of government "go digital," agencies face IT staffing and skills shortages, making it difficult to remain innovative while complying with regulations and requirements.

Artificial intelligence has great potential for government agencies, according to Jason Langone, senior director of global AI business development at Nutanix. In a video interview with SNG’s AI Scoop, Langone described three ways industries and government agencies are adopting AI:

  • Predictive AI to glean insights into existing data
  • Generative AI with its potential to improve workforce productivity
  • Copilot AI for content or software code creation

By using AI to understand vast datasets, government agencies can gain new insights that help them optimize supply chains, move people safely around the world and protect public health, Langone noted. Government IT teams using agency-specific AI chatbots might soon be able to "ask a question and … actually get something that has context," he said, which could make the public sector more responsive to opportunities and more resilient to threats.
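
As a rough illustration of what "something that has context" can mean in practice, the sketch below grounds a chatbot prompt in an agency's own documents before handing it to a language model. It is a minimal, hypothetical Python example: the document snippets, the retrieve helper and the prompt format are assumptions made for illustration, and a production system would more likely use embeddings and a vector store rather than simple keyword overlap.

    # Minimal, hypothetical sketch: ground a chatbot prompt in agency documents.
    # Retrieval here is plain keyword overlap; a production system would more
    # likely use embeddings, a vector store and an approved language model.

    AGENCY_DOCS = {
        "permits": "Building permit applications are reviewed within 10 business days.",
        "water": "Monthly water quality reports are published on the utility portal.",
        "roads": "Road maintenance requests are triaged by severity and traffic volume.",
    }

    def retrieve(question: str) -> str:
        """Return the document whose words overlap most with the question."""
        q_words = set(question.lower().split())
        best = max(AGENCY_DOCS, key=lambda k: len(q_words & set(AGENCY_DOCS[k].lower().split())))
        return AGENCY_DOCS[best]

    def build_prompt(question: str) -> str:
        """Pair the citizen's question with relevant agency context."""
        return (
            f"Context: {retrieve(question)}\n"
            f"Question: {question}\n"
            "Answer using only the context above."
        )

    if __name__ == "__main__":
        # The assembled prompt would be sent to whichever model the agency has approved.
        print(build_prompt("How long does a building permit review take?"))

The point of the sketch is the grounding step: the model sees the agency's own text alongside the question, which is what gives its answer context.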

But AI in government is not an automatic home run. Because a host of AI implementation challenges awaits, government agencies will need strong rules of the road to ensure the technology functions responsibly and operates effectively in a modernized IT ecosystem.

Challenges and Opportunities

There have already been efforts to craft government AI policies at the highest levels. In the United States, for instance, President Joe Biden issued an executive order in fall 2023 on the “safe, secure, and trustworthy development and use of AI.” 

Governments beyond the United States are moving, too. The European Union, for instance, recently adopted the world's first comprehensive package of AI laws.

The effort to formulate AI governance comes with challenges, especially when one considers the urgent need for timely action.

Ahead of the United States' November 2024 election, the potential for AI "deepfakes" to undermine voter confidence creates an "urgent need for robust ethical guidelines and regulatory frameworks," said Charles Palmer, executive director of the Center for Advanced Entertainment and Learning Technologies at Harrisburg University.

David Dunmoyer, campaign director for Better Tech for Tomorrow, a project within the Texas Public Policy Foundation, suggested that government entities may be hampered by their internal processes. 

“When facing the unprecedented implications of a ‘shiny object’ like AI, they tend to be overwhelmed by where to even start,” Dunmoyer said.

Government IT departments already have a lot on their hands, from the migration to hybrid cloud to meeting the demand for remote work. Today, "government agencies are already struggling with getting their data houses in order," Dunmoyer said. Yet, "it's imperative that government get the fundamentals in place, like data integrity and cyber hygiene, before incorporating complex, consequential technology like AI."

Still, significant opportunities await. If they can establish solid policies now, government agencies will be well-positioned to leverage AI in support of improved productivity and enhanced mission outcomes.

Early Successes

A number of early efforts demonstrate how governments at all levels are addressing AI's challenges and seizing its opportunities.

At the local level, for example, the City of Seattle was among the first to create a generative AI policy. Developed by city employees and a Generative AI Advisory Team, the policy establishes principles for the city's use of AI, stating that AI outputs should be valid and verifiable while ensuring equity and protecting privacy.

States are also starting to weigh in. New Jersey Gov. Phil Murphy, for example, has deployed a policy meant to guide state workers in the responsible use of generative AI. The guidelines will help ensure “that our public professionals can use these powerful tools responsibly and confidently,” Murphy said upon announcing the policy.

Meanwhile, the federal government has taken a number of steps to establish guardrails around AI.

First came the October 2023 White House executive order. Then, in March 2024, the Office of Management and Budget (OMB) followed up with the first government-wide policy to mitigate the risks of AI and harness its potential benefits.

According to the White House, these new federal AI safeguards include “a range of mandatory actions to reliably assess, test, and monitor AI’s impacts on the public.” The policy also lays out strategies for developing AI responsibly and for growing an AI-savvy federal workforce.

“We can applaud the efforts to get in front of this issue and get people, universities and companies talking about it in broad daylight,” Palmer said. “We’re seeing what I think are fairly strong signals that the government doesn’t want AI development to happen in secrecy or in a silo.”

Industry is following suit. At a recent Global AI Summit, for example, the United Kingdom and South Korea secured a commitment from 16 global AI companies to support a set of safety outcomes. That commitment "shows that there is growing awareness of the AI risks and a willingness to commit to mitigating them," said Joseph Thacker, principal AI engineer and security researcher at SaaS security company AppOmni.

Driving Positive Outcomes

At a high level, these and other government AI policies demonstrate the public sector’s leadership and innovation in artificial intelligence. At a more granular level, many government agencies are moving from AI policy to AI implementation to prove how AI can benefit the citizens and communities they serve. Among them are the National Oceanic and Atmospheric Administration (NOAA), the U.S. Department of Veterans Affairs (VA) and the U.S. Department of Homeland Security (DHS).

  • NOAA is forecasting weather hazards: High temperatures are the leading weather-related cause of death in America, and NOAA is tapping AI to tackle the risk. Analyzing “heat islands” in American cities, it’s working to protect the public from the impacts of extreme weather.

  • The VA is serving those who served: The VA uses AI to process feedback from veterans on their experience interacting with the agency. This leads to more effective case management, improved visibility into trends and sentiments, and, ultimately, better experiences and outcomes for vets.

  • DHS secures the nation: DHS has demonstrated the impact of AI in a number of high-profile efforts. In 2023, for example, U.S. Customs and Border Protection (CBP) used AI to identify suspicious patterns in vehicular border crossings, leading agents to discover 75 kilograms of narcotics hidden in a single vehicle. Also in 2023, Homeland Security Investigations (HSI) announced Operation Renewed Hope, which it described as "one of the most successful operations ever against online child sexual abuse." Using AI, HSI took less than a month to identify 311 previously unknown victims of sexual exploitation, rescue several abuse victims and arrest multiple suspects.

State governments have also celebrated early successes. For example, the Texas Department of Transportation uses AI for “incident and accident detection, providing them with real-time information to more swiftly deploy crews to mitigate traffic congestion and human injury,” Dunmoyer said. 

The agency is also finding promising applications in payment processing, traffic management and the evaluation of aging infrastructure.

Meanwhile, the Texas Workforce Commission “developed a chatbot named Larry that is being used for call screening to accurately transfer Texans to the right department,” Dunmoyer continued. “During the pandemic, with jobless claims at record highs, Larry helped the state manage the influx of unemployment cases with great effect.”
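
The article does not describe how Larry works internally, but call screening of this kind generally amounts to mapping a caller's free-text request to a department. The sketch below is a hypothetical keyword-scored router, with department names and keywords invented purely for illustration; it is not the Texas Workforce Commission's implementation.

    # Hypothetical sketch of intent-based call routing; department names and
    # keywords are invented for illustration and do not reflect the real system.
    from collections import Counter

    DEPARTMENTS = {
        "Unemployment Benefits": ["unemployment", "claim", "benefits", "laid", "off"],
        "Employer Tax Services": ["employer", "tax", "wages", "filing"],
        "Job Search Assistance": ["job", "resume", "training", "hiring"],
    }

    def route_call(caller_text: str) -> str:
        """Score each department by keyword hits and return the best match."""
        words = Counter(caller_text.lower().split())
        scores = {
            dept: sum(words[kw] for kw in keywords)
            for dept, keywords in DEPARTMENTS.items()
        }
        best_dept, best_score = max(scores.items(), key=lambda item: item[1])
        # Hand off to a human operator when nothing matches confidently.
        return best_dept if best_score > 0 else "General Operator"

    if __name__ == "__main__":
        print(route_call("I was laid off last week and need to file a claim"))

A production chatbot would typically replace the keyword scoring with a trained intent classifier or a language model, but the routing decision itself follows the same pattern.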

These are just a few of the early efforts governments have made to leverage the game-changing potential of AI. New use cases “continue to surface each and every day,” noted Langone, who said governments wishing to maximize AI’s returns will need to focus on delivering AI skills across the public-sector workforce while also ensuring the right IT systems are in place to overcome AI implementation challenges.

It will be “important to understand what are the platforms I’m going to deploy these workloads on,” concluded Langone, who said cloud will be a boon. Supported by strong policy and governance, the deployment of AI applications on already familiar platforms will accelerate a government agency’s ability to take maximum advantage of this increasingly powerful capability.

Adam Stone writes frequently about the intersection of government and technology.

© 2024 Nutanix, Inc. All rights reserved.
