Accenture AI lead Rumman Chowdhury explains why CIOs and CTOs need to be ethical in their use of artificial intelligence.
“Artificial intelligence (AI) is humans doing what they have always done, but at a scale that is difficult to control,” says Accenture AI lead Rumman Chowdhury. In her research, Chowdhury tackles the fears that businesses and society alike have toward AI, and she explains how CIOs and CTOs need to consider the ethics of using AI in their organizations.
“The negative implications of AI are amplified, and people talk about deepfakes and misinformation, especially with elections coming up,” Chowdhury said, referring to reports that AI was used to influence voting behavior in U.S. elections and in the U.K.’s referendum on E.U. membership.
“Propaganda is not new; it has existed for a very long time. But we now have the ability to influence millions of people with a bot or series of bots. So, it’s the scalability of the impact that is really frightening.”
Beyond the damage to information and public discourse, Chowdhury reminds CXOs that employees worry AI will take their jobs because the software is “so ruthlessly efficient it will lead to mass job loss”. Others worry about the singularity, the point at which technological growth becomes uncontrollable and irreversible, resulting in unfathomable changes to human civilization. More immediately, there are concerns about transparency in the use of AI and data, and growing fears that AI will further damage diversity and opportunity in society.
In her research at Accenture, Chowdhury describes how she frames the concerns toward AI as the 3Is: Immediate, Impactful and Invisible.
Chowdhury says business technology leaders need to be aware of the immediate impact AI has, pointing to algorithm changes in major platforms that can instantly alter the circumstances of individuals. She cites the automated debt recovery debacle at Australia’s Department of Human Services, in which Australians dependent on the state were told to pay back debts, many of which didn’t exist. In that case, automation and data were immediate and impactful, and largely negative for both citizens and the organization.
Since AI is not a tangible object we can see, its invisibility can unsettle people. Chowdhury says that while some organizations attempt to give AI a face, the truth is: “AI is just code. There are no pictures of AI, and because of its invisibility it’s scary, especially when coupled with the amount of information it collects.”
“We will reach a point where we have too much data,” Chowdhury said of how AI is amplifying data collection, especially in organizations already awash with data. When asked by CIOs how organizations should manage their data, she said: “We can learn the most from librarians and archivists, because they have experience knowing what to keep and what to discard. There needs to be a narrative around what data we keep and what we don’t.” She adds that the rise of data protection roles in organizations marks the beginning of the corporate world understanding the need for an ethical approach to data and AI.
How To Be Ethical
“The danger with AI is that it’s implemented by humans, either intentionally or unintentionally,” Chowdhury said with striking candor. A Ph.D.-holding expert in data, Chowdhury explained that today’s AI is not the neural network technology that has been in development for over a quarter of a century.
“AI was originally meant to mimic the way the human brain works, but that is not how it’s being used today,” she said. This is where the dangers of intentional or unintentional implementations creep in: “So the ethics of AI is about human beings making decisions and ensuring that we have responsible AI.”
For organizations, this means thinking beyond the bottom line, the Accenture expert said, adding that organizations should consider how their use of AI is part of triple bottom line accounting, which takes into account social, environmental, and financial implications of their business.
Chowdhury said her own career demonstrates the shift in corporate attitudes toward AI. In 2017, her role was mostly about raising awareness that organizations needed to treat AI ethically. By 2018, she said, organizations were aware of bias and the problems it causes. This past year she has seen the corporate world working to educate its customer base about how AI is used. Chowdhury admitted that the recent interest stems from corporations realizing that AI legislation is on the horizon.
“Now clients are asking for governance methodologies, not just the technical tools to implement AI,” she said, adding that organizations need more than governance: the approach to AI is intrinsically linked to the culture of the organization.
The arrival of the General Data Protection Regulation (GDPR) across the European Union has strengthened interest in how to use AI ethically, Chowdhury suggested. Organizations such as Facebook once leaned on “explainability,” relying on consumers to read the terms and conditions of an app or online service; now Facebook prompts consumers to check that they’re happy with their privacy settings. Chowdhury explained that this is an example of organizations understanding that transparency is easy to claim, and to demonstrate in a massive software terms agreement, but that transparency without agency is useless.
“Transparency gives you insight, but agency changes how you interact with a system or service,” she said.
A range of new ethical standards is expected to change the use of AI and data in organizations, she said, with the engineering body IEEE developing AI standards, and she expects further standards from the European Union.
“GDPR was revolutionary as it put a stake in the ground so that we all have a common point for discussions about our data,” she said. That discussion, Chowdhury believes, will spark a society-wide debate about AI ethics that CIOs and CTOs will need to consider.
“Privacy will be the biggest narrative of the next year.”