September 21, 2018

Facts are screwed

"Fake news uses the best means of the time," Paul Bernal said at last week's gikii conference, an annual mingling of law, pop culture, and technology. Among his examples of old media turned to propaganda purposes: hand-printed woodcut leaflets, street singers, plays, and pamphlets stuck in cracks in buildings. The big difference today is data mining, profiling, targeting, and the real-time ability to see what works and improve it.

Bernal's most interesting point, however, is that like a magician's plausible diversion, the surface fantasy story may stand in front of an earlier fake news story that is never questioned. His primary example was Vlad the Impaler, the historical figure who is thought to have inspired Dracula. Vlad's fame as a vicious and profligate killer derives from those woodcut leaflets. Bernal suggests the reasons: a) Vlad had many enemies who wrote against him, some of it true, most of it false; b) most of the stories were published ten to twenty years after he died; and c) there was a whole complicated thing about the rights to Transylvanian territory.

"Today, people can see through the vampire to the historical figure, but not past that," he said.

His main point was that governments' focus on content to defeat fake news is relatively useless. A more effective approach would have us stop getting our news from Facebook. Easy for me personally, but hard to turn into public policy.

Soon afterwards, Judith Rauhofer outlined a related problem: because Russian bots are aimed at exacerbating existing divisions, almost anyone can fall for one of the fake messages. Spurred on by a message from the Tumblr powers that be advising that she had shared a small number of messages that were traced to now-closed Russian accounts, Rauhofer investigated. In all, she had shared 18 posts - and these had been reblogged 2.7 million times, and are still being recirculated. The focus on paid ads means there is relatively little research on organic and viral sharing of influential political messages. Yet these reach vastly bigger audiences and are far more trusted, especially because people believe they are not being influenced by them.

In the particular case Rauhofer studied, "There are a lot of minority groups under attack in the US, the UK, Germany, and so on. If they all united in their voting behavior and political activity they would have a chance, but if they're fighting each other that's unlikely to happen." Divide and conquer, in other words, works as well as it ever has.

The worst part of the whole thing, she said, is that looking over those 18 posts, she would absolutely share them again and for the same reason: she agreed with them.

Rauhofer's conclusion was that the combination of prioritization - that is, the ordering of what you see according to what the site believes you're interested in - and targeting form "a fail-safe way of creating an environment where we are set against each other."

So in Bernal's example, an obvious fantasy masks an equally untrue - or at least wildly exaggerated - story, while in Rauhofer's the things you actually believe can be turned into weapons of mass division. Both scenarios require much more nuance and, as we've argued here before, many more disciplines to solve than are currently being deployed.

Andrea Matwyshyn provided five mini-fables as a way of illustrating five problems to consider when designing AI - or, as she put it, five stories of "future AI failure". These were:

- "AI inside" a product can mean sophisticated machine learning algorithms or a simple regression analysis; you cannot tell from the outside what is real and what's just hype, and the specifics of design matter. When Google's algorithm tagged black people as "gorillas", the company "fixed" the algorithm by removing "gorilla" from its list of possible labels. The algorithm itself wasn't improved.

- "Pseudo-AI" has humans doing the work of bots. There are lots of historical examples of this, most notably the mechanical Turk; Matwyshyn chose the fake automaton known as the Digesting Duck.

- Decisions that bring short-term wins may also bring long-term losses in the form of unintended negative consequences that haven't been thought through. Among Matwyshyn's examples were a number of cases where human interaction changed the analysis, such as the failure of Google Flu Trends and Microsoft's Tay bot.

- Minute variations or errors in implementation or deployment can produce very different results than intended. Matwyshyn's prime example was a pair of electronic hamsters she thought could be set up to repeat each other's words to form a recursive loop. Perhaps responding to harmonics less audible to humans, they instead screeched unintelligibly at each other. "I thought it was a controlled experiment," she said, "and it wasn't."

- There will always be system vulnerabilities and unforeseen attacks. Her example was squirrels that eat power lines, but the backhoe is the traditional example.

To prevent these situations, Matwyshyn emphasized disclosure about code; verification in the form of third-party audits; substantiation in the form of evidence to back up the claims that are made; anticipation, that is, liability and good corporate governance; and remediation, again a function of good corporate governance.

"Fail well," she concluded. Words for our time.


Illustrations: Woodcut of Vlad, with impaled enemies.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.