Self-Regulation is Key to Future of Ethical Enterprise AI

Experts explain why enterprises need to take a responsible approach to creating AI applications.

February 24, 2025

If you’ve ever watched a science fiction film about unchecked robots that outsmart their human creators and eventually overthrow them, then you know this to be true: Artificial intelligence needs guardrails.

Although a hostile AI takeover is unlikely, rules and standards are nevertheless vital to ensure that artificial intelligence performs accurately and ethically, which is why regulators in both the United States and Europe are stepping in.

And yet, some worry that excess regulation could stifle innovation and make the United States and its allies less competitive compared to adversaries like China.

“Our place in the world depends on our ability to invest in AI. Period. End of story,” billionaire investor Mark Cuban told CNBC in a September 2024 interview.

As humanity grapples with the benefits and risks of government oversight, the business community is increasingly favoring a middle ground: AI self-regulation.

“I would much rather have the current companies define reasonable boundaries,” former Google CEO Eric Schmidt told NBC News in a May 2023 interview.

“There’s no way a non-industry person can understand what is possible. There’s no one in the government who can get it right.”

By crafting their own rules, business leaders like Schmidt believe that companies can ensure the responsible application of AI without hindering creativity, adoption or advancement.

Regulations on the Rise

The United States has seen a flurry of AI-related regulatory action. At least 45 states, Puerto Rico, the Virgin Islands and Washington, D.C., introduced AI bills in recent legislative sessions, according to the National Conference of State Legislatures. Of those jurisdictions, more than 30 adopted resolutions or enacted legislation.

Among the states that have moved to regulate AI are: 

  • Colorado: Colorado has enacted consumer protection legislation that calls for developers of high-risk AI systems to avoid “algorithmic discrimination” (i.e., outcomes that perpetuate bias or inequality) and to be transparent with consumers.

  • Utah: Utah has created the Artificial Intelligence Policy Act, which imposes transparency obligations around generative AI and limits a company’s ability to “blame” AI when AI-generated statements violate consumer protections.

As it did with data privacy, famously adopting the General Data Protection Regulation (GDPR) in 2016, the European Union is leading the way on AI regulation. The EU AI Act is set to take full effect by August 2026. The law “is intended to promote the uptake of human-centric and trustworthy AI and to ensure a high level of protection of health, safety, fundamental rights, democracy and rule of law from harmful effects of AI systems while supporting innovation and the functioning of the internal market,” explained global law firm White & Case.

The EU AI Act regulates AI uses according to their risk level, banning outright those practices that pose an unacceptable level of risk.

“For example, developing or using an AI system that intentionally manipulates people into making harmful choices they otherwise wouldn’t make is deemed by the act to pose unacceptable risk to users, and is a prohibited AI practice,” IBM reported in a September 2024 explainer.

The act also prohibits “social scoring” — using AI to evaluate people based on their social behavior, which can lead to detrimental or unfavorable treatment — as well as the use of AI to exploit people's vulnerabilities. Likewise, it puts tight constraints around scraping of facial images from the internet and other biometric-identification practices.

Critics of regulation worry that laws like the EU AI Act might go too far.

“Regulations that are too rigid can limit crucial development by causing lengthy approval processes and discouraging experimentation,” said Sabas Lin, chief technical officer at AI-driven knowledge management platform Knowee.

“For instance, healthcare AI tools with potential to save lives may face challenges meeting stringent standards for high-risk applications.”

The Pros and Cons of Self-Regulation

Embracing voluntary efforts at self-regulation can keep AI innovation moving forward while still delivering responsible and ethical AI, according to Ann Skeet, senior director of leadership ethics at the Markkula Center for Applied Ethics and co-author of a handbook on AI ethics produced by the Institute for Technology, Ethics and Culture (ITEC).

“Companies know themselves best and are often aware of pitfalls not obvious to regulators,” Skeet said.

“Ethics and innovation are not mutually exclusive. In the long term, doing what is right can save time and money. In the short term, it can appeal to prospective employees, customers and investors. We want technology in the service of humans and not the other way around, and one way to achieve this is to be intentional and responsible about how we develop new technologies like artificial intelligence.”

Through self-regulation, “companies have the leeway to formulate their internal standards not only based on how their applications operate, but also on acceptable ethical standards,” echoed Cache Merrill, founder of the software development consultancy Zibtek.

“If companies take the lead, they can remain creative, be responsible in their innovations and minimize the threat of being heavily regulated in the future.”

Moreover, self-regulation “creates goodwill and builds confidence in stakeholders and consumers,” Merrill added.

But self-regulation has shortcomings, too. 

“Despite its benefits, self-regulation can draw criticism for weaker oversight and inconsistent standards. If standards vary too widely, it can lead to public concerns and calls for more government involvement,” explained Lin, who said uneven practices can be hard to enforce. 

“Without a way to enforce guidelines, voluntary standards can lose credibility and effectiveness.”

Embracing Ethical AI

Whether they support government intervention or oppose it, companies that believe in ethical AI can begin to self-regulate by looking at existing guidelines and frameworks.

For example, the Asilomar AI Principles coordinated by the Future of Life Institute lay out a number of best practices. Among other things, they suggest that AI developers should prioritize the “beneficial use” of AI — that is, applications that “grow our prosperity through automation while maintaining people’s resources and purpose.”

Likewise, the principles dictate that AI systems should “be safe and secure throughout their operational lifetime,” and should be fully transparent. “If an AI system causes harm,” the principles state, “it should be possible to ascertain why.”

Some corporations already have published their own standards. Microsoft, for example, has created its own ethical AI policies that focus on:

  • Fairness: AI systems should treat all people fairly.

  • Reliability and safety: AI systems should perform reliably and safely.

  • Privacy and security: AI systems should be secure and respect privacy.

  • Inclusiveness: AI systems should empower everyone and engage people.

  • Transparency: AI systems should be understandable.

  • Accountability: People should be accountable for AI systems.

While specifics will vary across businesses and industries, shaped by proposed use cases, data privacy concerns and public reporting responsibilities, among other things, it’s clear that moving toward self-regulation could help the private sector set a high bar for ethical AI while minimizing the need for intrusive government oversight. Doing so could empower businesses to move quickly on the promise of AI while still producing ethical, responsible outputs that benefit consumers, businesses and the economy at large.

“Industry can roughly get it right, and then the government can put a regulatory structure around it,” suggested Schmidt.

Adam Stone is a journalist with more than 20 years of experience covering technology trends in the public and private sectors.
