Them
In the first two minutes of a recent episode of the BBC program Panorama, "Are You Scared Yet, Human?", the word that kept popping out was "it". The program is largely about the AI race between the US and China, an obviously important topic - see Amy Webb's recent book, The Big Nine. But what I wanted to scream at the show's producers was: "AI is not *it*. AI is *they*." The program itself proved this point by seguing from commercial products to public surveillance systems to military dreams of accurate targeting and of an edge over the rival country.
The original rantish complaint I thought I was going to write was about gendering AI-powered voice assistants and, especially, robots. Even though Siri has a female voice, it's not a "she". Even if Alexa had a male voice, it wouldn't be a "he". Yes, there's a long tradition of dubbing ships, countries, and even fiddles "she", but that bothers me less than applying the term to a compliant machine. Yolande Strengers and Jenny Kennedy made this point quite well in their book The Smart Wife, in which they trace much of today's thinking about domestic robots to the role model of Rosie, the robot maid in the 1960s outer-space animated TV sitcom The Jetsons. Strengers and Kennedy want to "queer" domestic robots so they no longer perpetuate heteronormative gender stereotypes.
The it-it-it of Panorama raised a new annoyance. Calling AI "it" - especially when the speaker is, as here, Jeff Bezos or Elon Musk - makes it sound like a monolithic force of technology that can't be stopped or altered, rather than what it is: an umbrella term for a bunch of technologies, many of them experimental and unfinished, and all of which are being developed and/or exploited by large companies and military agencies for their own purposes, not ours. "It" hides the unrepresentative workforce defining AI's present manifestation, machine learning. *This* AI is *systems*, plural, not a single *thing*, and their impact varies depending on the application.
Last week, Pew Research released the results of a survey it conducted in 2020, in which two-thirds of the experts it consulted predicted that ethics would not be embedded in AI by 2030. Many pointed out that societies and contexts differ; that who gets to define "ethics" is crucial; and that there will always be bad actors who ignore whatever values the rest of us agree on. The report quotes me saying that it's not AI that needs ethics, it's the *owners*.
I made a stab at categorizing the AI systems we encounter every day. The first that spring to mind are scoring applications, whose impact on most people's lives appears to lie in refusing access to things we need - asylum, probation in the criminal justice system, welfare in the benefits system, credit in the financial system - and assistance systems that answer questions and offer help, such as recommendation algorithms, search engines, voice assistants, and so on. I forgot about game-playing systems, and since then a fourth type has accelerated into public use: identification systems, almost all of them deeply flawed but being deployed anyway - automated facial recognition, emotion recognition, smile detection, and fancy lie detectors.
I also forgot about medical applications, where, despite many genuine breakthroughs - such as today's story that machine learning has helped develop a blood test to detect 50 types of early-stage cancer - many highly touted efforts have been failures.
"It"ifying AI makes many machine learning systems sound more successful than they are. Today's facial recognition is biased and inaccurate . Even in the pandemic, Benedict Dellot told a recent Westminster Health Forum seminar on AI in health care, the big wins in the pandemic have come from conventional data analysis underpinned by new data sharing arrangements. As examples, he cited sharing lists of shielding patients with local authorities to ensure they got the support they needed, linking databases to help local authorities identify vulnerable people, and repurposing existing technologies. But shove "AI" in the name and it sounds more exciting; see also "nano" before this and "e-" before that.
Maybe - *maybe* - one day we will say "AI" and mean a conscious, superhuman brain as originally imagined by science fiction writers and Alan Turing. Machine learning is certainly not that, as Kate Crawford writes in her recent Atlas of AI. Instead, we're talking about a bunch of computers calculating statistics from historical data, forever facing backward. And, as authors such as Sarah T. Roberts and Mary L. Gray and Siddharth Suri have documented, very often today's AI is humans all the way down. Direct your attention to the poorly paid worker behind the curtain.
Crawford's book reminded me of Arthur C. Clarke's famous line, "Any sufficiently advanced technology is indistinguishable from magic." After reading her structural analysis of machine-learning AI, it morphed into: "Any technology that looks like magic is hiding something." For Crawford, what AI is hiding is its essential nature as an extractive industry. Let's not grant these systems any more power than we have to. Breaking "it" apart into "them" allows us to pick and choose the applications we want.
Illustrations: IBM's Watson winning at Jeopardy; its later adventures in health care were less successful.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.