AI is already an essential component of many industries, using decisions made by sophisticated software algorithms to control physical machinery in the real world. Traffic lights and elevators, for instance, have already become more self-controlling than many who use them realize.
In the coming years, such "cyber-physical systems" will accelerate their reach into our lives, our societies and the physical world. That's why anthropologist Genevieve Bell is working to change the way many people think about AI and automation, beginning with how they design and plan for the long term.
Bell spent 20 years as a researcher at Intel, where she was the first female senior fellow, a distinction she still holds. She is a futurist and AI practitioner grounded in anthropology, which means her approach is not to dictate answers, but to ask questions.
"We need to take AI safely to scale by thinking about the big questions that cyber-physical systems raise, and how we go about answering them," said Bell in a call from the Australian National University in Canberra, where she leads an ambitious program called the 3A Institute – Autonomy, Agency and Assurance. The 3Ai launched a master's program in early 2019.
The distinguished professor is taking a Silicon Valley approach to the 3Ai curriculum: start teaching now and adapt as you learn during the academic year. Her main aim, and that of the institute, is to build a new branch of engineering to bring AI responsibly, safely and sustainably to scale.
Her time in Silicon Valley strengthened her belief in the need to assert the importance of people, and the diversity of our lived experiences, into conversations about technology and the future.
“Since returning home to Australia in 2017 and establishing the 3Ai, I have been increasingly struck by the complicated dance of being human in this world I was helping make digital, and what we could and should be doing differently,” she said.
Bell listed several big-picture questions AI creators should ask before deciding what sort of AI to build and how to judge whether it is succeeding.
How autonomous should our AI systems be? How much agency should they have? These are fundamental questions. Bell suggests that AI creators deeply consider what to build, and even whether to build at all, before they begin thinking about implementation.