Archives

October 16, 2020

The rights stuff

It took a lake to show up the fatuousness of the idea of granting robots legal personality rights.

The story, which the AI policy expert Joanna Bryson highlighted on Twitter, goes like this: in February 2019 a small group of people, frustrated by their inability to reduce local water pollution, successfully spearheaded a proposition in Toledo, Ohio, that created the Lake Erie Bill of Rights. Its history since has been rocky. In February 2020, a farmer sued the city and a US district judge invalidated the bill. This week, a three-judge panel from Ohio's Sixth District Court of Appeals ruled that the February judge made a mistake. For now, the lake still has its rights. Just.

We will leave aside the question of whether giving rights to lakes and the other ecosystems listed in the above-linked Vox article is an effective means of environmental protection. But given that the idea of giving robots rights keeps coming up - the EU is toying with the possibility - it seems worth teasing out the difference.

In response to Bryson, Nicholas Bohm noted the difference between legal standing and personality rights. The General Data Protection Regulation, for example, grants legal standing in two new ways: collective action and civil society representing individuals seeking redress. Conversely, even the most-empowered human often lacks legal standing; my outrage that a brick fell on your head from the top of a nearby building does not give me the right to sue the building's owner on your behalf.

Rights as a person, however, would allow the brick to sue on its own behalf for the damage done to it by landing on a misplaced human. We award that type of legal personhood to quite a few things that aren't people - corporations, most notoriously. In India, idols have such rights, and Bohm cites a case in which the trustee of an Indian temple was allowed to join an English case claiming improper removal, because the idol they represented had legal personality in India.

Or, as Bohm put it more succinctly, "Legal personality is about what you are; standing is about what it's your business to mind."

So if lakes, rivers, forests, and idols, why not robots? The answer lies in what these things represent. The lakes, rivers, and forests on whose behalf people seek protection were not human-made; they are parts of the larger ecosystem that supports us all, and most intimately the people who live on their banks and verges. The Toledoans who proposed granting legal rights to Lake Erie were looking for a way to force municipal action over the lake's pollution, which was harming them and all the rest of the ecosystem the lake feeds. At the bottom of the lake's rights, in other words, are humans in existential distress. Granting the lake rights is a way of empowering the humans who depend on it. In that sense, even though the Indian idols are, like robots, human-made, giving them personality rights enables action to be taken on behalf of the human community for whom they have significance. Granting the rights does not require either the lake or the idol to possess any form of consciousness.

In a paper to which Bryson linked, S.G. Solaiman argues that animals don't qualify for rights, even though they have some consciousness, because a legal personality must be able to "enjoy rights and discharge duties". The Smithsonian National Zoo's giant panda, who has been diligently caring for her new cub for the last two months, is not doing so out of legal obligation.

Nothing like any of this can be said of rights for robots, certainly not now and most likely not for a long time into the future, if ever. Discussions such as David Gunkel's How to Survive a Robot Invasion, which compactly summarizes the pros and cons, generally assume that robots will only qualify for rights after a certain threshold of intelligent consciousness has been met. Giving robots rights in order to enable suffering humans to seek redress does not come up at all, even when the robots' owners hold funerals because the manufacturer has discontinued the product. Those discussions rightly focus on manufacturer liability.

In the 2015 British TV series Humans (a remake of the 2012 Swedish series Äkta människor), an elderly Alzheimer's patient (William Hurt) is enormously distressed when his old-model carer robot is removed, taking with it the only repository of his personal memories, which he can no longer recall unaided. It is not necessary to give the robot the right to sue to protect the human it serves, since family or health workers could act on his behalf. The problem in this case is an uncaring state.

The broader point, as Bryson wrote on Twitter, is that while lakes are unique and can be irreparably damaged, digital technology - including robots - "is typically built to be fungible and upgradeable". Right: a compassionate state merely needs to transfer George's memories into a new model. In a 2016 blog posting, Bryson also argues against another commonly raised point, which is whether the *robots* suffer: if designers can install suffering as a feature, they can take it out again.

So, the tl;dr: sorry, robots.


Illustrations: George (William Hurt) and his carer "synth", in Humans.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 9, 2020

Incoming

This week saw the Antitrust Subcommittee of the (US) House Judiciary Committee release the 449-page report (PDF) on its 16-month investigation into Google, Apple, Facebook, and Amazon - GAFA, as we may know them. Or, if some of the recommendations in this report get implemented, *knew* them. The committee has yet to vote on the report, and the Republican members have yet to endorse it. So this is very much a Democrats' report...but depending on how things go over the next month, come January that may be sufficient to ensure action.

At BIG, Matt Stoller has posted a useful and thorough summary. As he writes, the subcommittee focused on a relatively new idea of "gatekeeper power", which each of the four exercises in its own way (app stores, maps, search, phone operating systems, personal connections), and each of which is aided by its ability to surveil the entirety of the market and undermine current and potential rivals. The report also attacks the agencies tasked with enforcing the antitrust laws for permitting the companies to make some 500 acquisitions. The resulting recommendations fall into three main categories: restoring competition in the digital economy, strengthening the antitrust laws, and reviving antitrust enforcement.

In a discussion while the report was still just a rumor, a group of industry old-timers seemed dismayed at the thought of breaking up these companies. A major concern was the impact on research. The three great American corporate labs of the 1950s to 1980s were AT&T's Bell Labs, Xerox PARC, and IBM's Watson. All did basic research, developing foundational ideas that would matter for decades to come but might never provide profits for the company itself. The 1984 AT&T breakup effectively killed Bell Labs. Xerox famously lost out on the computer market. IBM redirected its research priorities toward product development. GAFA and Microsoft operate substantial research labs today, but they are more focused on the technologies, such as AI and robotics, that they envision as their own future.

The AT&T case is especially interesting. Would the Internet have disrupted AT&T's business even without the antitrust case, or would AT&T, kept whole, have been able to use its monopoly power to block the growth of the Internet? Around the same time, European countries were deliberately encouraging competition by ending the monopolies of their legacy state telcos. Without that - or with AT&T left intact - anyone wanting to use the arriving Internet would have been paying a small fortune to the telcos just to buy a modem to access it with. Even as it was, the telcos saw Voice over IP as a threat to their lucrative long-distance business, and it was only network neutrality that kept them from suppressing it. Today, Zoom-like technology might be available, but likely out of reach for most of us.

The subcommittee's enlistment of Lina Khan as counsel suggests GAFA should have seen this coming from the beginning. Khan made waves while still a law student by writing a lengthy treatise on Amazon's monopoly power and its lessons for reforming antitrust law, back when most of us still thought Amazon was largely benign. One of her major points was that much opposition to antitrust enforcement in the technology industry is based on the idea that every large company is always precariously balanced because at any time, a couple of guys in a garage could be inventing the technology that will make them obsolete. Khan argued that this was no longer true, partly because those two garage guys were enabled by antitrust enforcement that largely ceased after the 1980s, and partly because GAFA are so powerful that few start-ups can find funding to compete with them directly, and so rich that they can buy and absorb or shut down anyone who tries. The report, like the hearings, notes the fear of reprisal among business owners asked for their experiences, as well as the disdain with which these companies - particularly Facebook - have treated regulators. All four companies have been repeat offenders, apparently not inspired to change their behavior by even the largest fines.

Stoller thinks that we may now see real action because our norms have shifted. In 2011, admiration for monopolists was so widespread, he writes, that Occupy Wall Street honored Steve Jobs' death, whereas today US and EU politicians of all stripes are targeting monopoly power and intermediary liability. Stoller doesn't speculate about causes, but we can think of several: the rapid post-2010 escalation of social media and smartphones; Snowden's 2013 revelations; the 2018 Cambridge Analytica scandal; and the widespread recognition that, as Kashmir Hill found, it's incredibly difficult to extricate yourself from these systems once you are embedded in them. Other small things have added up, too, such as Mark Zuckerberg's refusal to appear in front of a grand committee assembled by nine nations.

Put more simply, ten years ago GAFA and other platforms and monopolists made the economy look good. Today, the costs they impose on the rest of society - precarious employment, lost privacy, a badly damaged media ecosystem, and the difficulty of containing not just misinformation but anti-science - are clearly visible. This, too, is a trend that the pandemic has accelerated and exposed. When the cost of your doing business is measured in human deaths, people start paying attention pretty quickly. You should have paid your taxes, guys.


Illustrations: The fifth element breaks up the approaching evil in The Fifth Element.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.