THOUGHT LEADERSHIP
Robotics has huge potential to enhance health, culture, and civilization. But only if we demand accountability for making sure learning models are built using the right data.
Whether robots are friend or foe to humankind has long been debated. We have sci-fi writers and Hollywood to thank, in part, for both sides of the argument.
Indeed, there are some fairly disheartening dramas—think Westworld, Blade Runner, and The Matrix—that depict a dystopian future in which machines take over, leaving the joys and splendor of civilization in the dust.
At the same time, however, long-ago classics such as The Jetsons, the Star Trek TV series, and 2001: A Space Odyssey inspired several generations of humans to dream big. After percolating for decades, many of those dreams are now becoming realities, and they’re largely making the world a better place.
Personalization, Profiling, And Privacy
All these considerations dance around a fine line between using data for personalization—which most of us seem to want—and profiling, a word that now tends to make the hair on the back of people’s necks stand up.
Nearly everything we want AI to do for us today is about personalization, whether we’re asking Siri to place a phone call, Alexa to play a song, or Waze to get us to our intended destination. Or we might be building an exoskeleton that will allow a paralyzed person to walk. Irrespective of the use case, getting the right result is all about programming the right data for the individual at hand.

And for successful programming, we have to make some generalizations. For example, if data about me says I’m a Georgia Tech professor on email till 10 p.m. every night, a marketing system might assume I’m a coffee drinker and target me accordingly. That could benefit me.
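To make that mechanic concrete, here is a deliberately naive, hypothetical sketch in Python of the kind of rule such a marketing system might encode. Every name, field, and threshold is invented for illustration; the point is how a generalization about a group quietly becomes an automated decision about an individual.

```python
from dataclasses import dataclass

@dataclass
class UserActivity:
    occupation: str        # e.g., "professor"
    last_email_hour: int   # hour of the user's last sent email, 24-hour clock

def infer_coffee_drinker(user: UserActivity) -> bool:
    """Naive generalization: academics emailing late at night drink coffee.

    The rule encodes an assumption about a group, not a fact about the
    individual -- exactly where personalization starts to shade into profiling.
    """
    return user.occupation == "professor" and user.last_email_hour >= 22

# The generalization happens to fit the example above...
night_owl = UserActivity(occupation="professor", last_email_hour=22)
print(infer_coffee_drinker(night_owl))  # True -> targeted with coffee ads

# ...but it silently misfires for anyone it wasn't written for,
# say a professor who works late and never drinks coffee at all.
```

The rule itself is trivial; the assumption baked into it is not, and that is exactly the question that follows.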
But when do we cross the line between “profiling” for everyone’s benefit and making assumptions and generalizations that could offend, invade someone’s privacy, or even do harm?
These are early days, and these lines of demarcation have not yet been fully drawn. But erroneous assumptions can result in AI bias, and that can have all kinds of unexpected consequences. In 2018 alone, for example, AI bias caused immigrants to be erroneously deported, unsafe cancer treatments to be recommended, and an “ethnicity detection” feature to be used to search faces in New York City by race without citizens’ knowledge or permission.
These weren’t AI’s finest hours.
About the Author: Ayanna Howard is an American roboticist and Chair of the School of Interactive Computing at Georgia Institute of Technology. She’s also the founder and CTO of Zyrobotics, LLC, a company that focuses on applying technology in ways that enhance the quality of life for children. Howard holds three patents, and among her many awards are: Brown Engineering Alumni Medal (BEAM), 2016; AAAS-Lemelson Invention Ambassador, 2016-2017; and Forbes' America's Top 50 Women In Tech, 2018. Howard says her favorite robot of all time is Rosie, the frilly maid from “The Jetsons” cartoon.