The Ethics of Digitally Networked Beings
“Eudaimonia, as elucidated by Aristotle, is a practice that defines human well-being as the highest virtue for a society. Translated roughly as ‘flourishing,’ the benefits of eudaimonia begin by conscious contemplation, where ethical considerations help us define how we wish to live,” the IEEE guidelines explain. The guidelines’ mission statement reads:
To ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.
The guide advocates for ‘holistic economic prosperity,’ not ‘one-dimensional goals like productivity increase or GDP growth.’
In keeping with the idea of benefiting humanity and not just the bottom line, the guidelines propose that we use metrics that reflect well-being, such as societal happiness, to give a broader understanding of the impact of technology. Inclusion and diversity arise repeatedly, whether in the philosophical systems we must consider, in the range of background and experience one should find on technical review boards, or in the process of creating the document itself. Again and again, the message I heard in this text was that technology must serve us; technology should be aligned with our well-being. Technology should help humans flourish.
I found myself thinking about autonomous cars. If societal well-being had been taken into account, I do not believe that autonomous cars would be a touchstone for so many of the current ethics and AI debates. We would, instead, be investing in safe, rapid, and inexpensive public transportation and figuring out the best new routes. But we have autonomous cars, and now we struggle with the questions around them: how safe should they be before we deploy them, and how do we judge that? How will they impact humans and our environment? How should the cars themselves make choices? We may be becoming digitally networked beings, but we are also asking our digital technologies to make decisions that have traditionally belonged only to us. The IEEE guidelines read:
A self-driving vehicle’s prioritization of one factor over another in its decision-making will need to reflect the priority order of values of its target user population, even if this order is in conflict with that of an individual designer, manufacturer, or client.
A 2017 survey of New York City residents found that 80 percent of respondents chose to “swerve to avoid hitting the pedestrian.” Only 5 percent prioritized the car occupant (and 15 percent said ‘don’t know’); a 2016 Car and Driver story reports that “Mercedes-Benz simply intends to program its self-driving cars to save the people inside the car. Every time.” Could Mercedes-Benz autonomous cars be banned from New York’s streets? Could a car automatically shift in and out of kill-pedestrians mode, depending on its location? These were my thoughts as I clicked the link from a TechCrunch story titled “Play this killer self-driving car ethics game” and found myself on MIT’s Moral Machine. Indeed, I felt as if I were playing a game as I sat at my computer and chose who would live — the pregnant pedestrian or the one who was not, the three passengers or the single woman crossing the street — and who would die. I was not the driver or the pedestrian; I was the removed decision maker, choosing one of two lanes, if only because I was not offered the option to simply turn right and crash into the wall.
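To see how little machinery the unsettling part of that question requires, here is a minimal, purely hypothetical sketch of a location-dependent priority ordering. Nothing in it reflects any manufacturer’s actual logic; the jurisdiction names, the value orderings, and the `choose_action` helper are all invented for illustration.

```python
# Purely illustrative: a toy, location-dependent priority ordering for a
# hypothetical autonomous vehicle. No vendor publishes logic like this;
# the jurisdictions, orderings, and outcomes below are invented.

# Each jurisdiction ranks whose safety is weighted first in an unavoidable-harm case.
PRIORITY_BY_JURISDICTION = {
    "NYC": ["pedestrian", "occupant"],      # e.g., the survey majority favored swerving
    "default": ["occupant", "pedestrian"],  # e.g., the reported Mercedes-Benz stance
}

def choose_action(jurisdiction: str, outcomes: dict) -> str:
    """Pick the action that protects the highest-ranked party.

    `outcomes` maps an action to the party it protects,
    e.g. {"swerve": "pedestrian", "stay": "occupant"}.
    """
    priorities = PRIORITY_BY_JURISDICTION.get(jurisdiction, PRIORITY_BY_JURISDICTION["default"])
    for party in priorities:
        for action, protected in outcomes.items():
            if protected == party:
                return action
    return "brake"  # fallback when no listed party is affected

if __name__ == "__main__":
    dilemma = {"swerve": "pedestrian", "stay": "occupant"}
    print(choose_action("NYC", dilemma))     # -> "swerve"
    print(choose_action("Berlin", dilemma))  # -> "stay" (falls back to the default ordering)
```

The point of the sketch is only that flipping the ordering by geography is technically trivial; whether doing so is ethically or legally acceptable is exactly the question the guidelines raise.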
The IEEE guidelines include a glossary of terms along with what the authors have identified as “common definitions” used in four professional areas: Computational Disciplines; Engineering; Government, Policy, and Social Sciences; and Ethics and Philosophy; as well as in ‘Ordinary language’. Where no common definition was identified, the glossary welcomes recommendations. For the word ‘transparency’, for example, the ordinary-language definition is “Easily seen through, recognized, understood, or detected (OED); Sufficient illumination to confer comprehension.” Common definitions for transparency in the fields of Engineering and Computational Disciplines are not yet provided. As I looked at the placeholder text, “We welcome recommendations!”, I thought yet again about autonomous cars. The guidelines note:
A/IS should be designed with transparency and accountability as primary objectives. The logic and rules embedded in the system must be available to overseers of systems, if possible. If, however, the system’s logic or algorithm cannot be made available for inspection, then alternative ways must be available to uphold the values of transparency. Such systems should be subject to risk assessments and rigorous testing.
In a recent piece for The Guardian, Andrew Smith points out that “autonomous cars currently being tested may contain 100m lines of code and, given that no programmer can anticipate all possible circumstances on a real-world road, they have to learn and receive constant updates.” He notes that troubleshooting the Toyota Camry’s software after the cars accelerated for no apparent reason required 20 months and two experts, “revealing a twisted mass of what programmers call ‘spaghetti code’, full of algorithms that jostled and fought, generating anomalous, unpredictable output.”
I think about Dean Pomerleau’s 1991 autonomous vehicle research, which seemed to be going well until the day his car approached a bridge and swerved. “Only after extensively testing his software’s responses to various visual stimuli did Pomerleau discover the problem,” Davide Castelvecchi writes for Nature: “the network had been using grassy roadsides as a guide to the direction of the road, so the appearance of the bridge confused it.”
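The failure mode is easy to reproduce in miniature. The toy below is not Pomerleau’s neural network, and its data are invented; it only shows how a learner, given two candidate cues, can prefer a spurious one that happens to track the target perfectly on the training roads and then fails the moment that cue disappears, as grass does on a bridge.

```python
# Toy sketch (invented data, not Pomerleau's actual system) of a learner
# latching onto a spurious cue that coincidentally tracks the target.

def fit_single_feature(xs, ys):
    """Least-squares slope and intercept for y ~ a*x + b (pure Python)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
    b = my - a * mx
    return a, b

# Training roads: target = road heading; one genuine cue, one spurious cue.
headings   = [-2.0, -1.0, 0.0, 1.0, 2.0]
lane_angle = [-1.9, -1.1, 0.1, 0.9, 2.1]   # noisy but genuine signal
grass_edge = [-2.0, -1.0, 0.0, 1.0, 2.0]   # tracks the heading purely by coincidence

models = {}
for name, feature in [("lane", lane_angle), ("grass", grass_edge)]:
    a, b = fit_single_feature(feature, headings)
    err = sum((a * x + b - y) ** 2 for x, y in zip(feature, headings))
    models[name] = (a, b, err)

chosen = min(models, key=lambda k: models[k][2])
print("cue chosen by training error:", chosen)  # -> "grass"

# On a bridge the grass edge disappears (the cue reads 0), but the road curves.
a, b, _ = models[chosen]
print("predicted heading on the bridge:", a * 0.0 + b)  # ~0.0, i.e. straight ahead
print("actual heading on this stretch: ", -1.5)         # the car should be turning
```

Judged only by fit to the training data, the grass cue looks better than the real one, which is exactly why extensive testing against varied stimuli was needed to expose it.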
I have read arguments promising that autonomous cars — by eliminating human errors caused by distraction, exhaustion, and the like — will make our streets safer. Driveways could be replaced by gardens, parking lots by playgrounds. I do not think this is a bad vision, but I still have concerns about how we might get there and who, when we get there, will be able to tap a smartphone app and grab a car home from the playground, and who (calling from a more distant park, or one in a lower-income area, perhaps) will not.
To be clear, I thought about autonomous cars because they exist. They are here, albeit still in test mode. I read IEEE’s Ethically Aligned Design not just as a guide for our future, but as a measure to hold up to our world right now as we ask: where exactly are we?
Cathy O’Neil made a compelling point earlier this year in an interview with Wired: “we’re still focusing on the wrong things. We’re not measuring the actual damage being done to democracy, because it is hard to measure democracy. Instead, we’ll focus on things like pedestrian deaths with self-driving cars.”
I admit I have been thinking a lot about cars, but then, even autonomous cars can play a part in a dystopian surveillance state. “For a start, AVs will record everything that happens in and around them,” the Economist reports. “When a crime is committed, the police will ask nearby cars if they saw anything.” The MIT Technology Review recently ran a story raising privacy concerns about the data currently collected by dockless bikes and shared with cities.
Privacy-related issues and recommendations are prevalent in the IEEE guidelines — from digital guardians, to expressing meaningful consent, to privacy assessments that identify data misuse — along with the acknowledgment that governments can access and potentially abuse our data. The guidelines read:
Government mass surveillance has been a major issue since allegations of collaboration between technology firms and signals intelligence agencies such as the U.S. National Security Agency and the U.K. Government Communications Headquarters were revealed.
The Electronic Frontier Foundation (EFF), in its feedback on the draft, articulates our situation with admirable, crisp clarity:
Only societies like East Germany, that were willing to recruit one informant per 6.5 citizens, could possibly watch and pay attention to all of their citizens’ actions. But the combination of already-deployed surveillance technologies and machine learning for analyzing the data will mean that exhaustive surveillance is becoming possible without the need for such enormous commitments of money and labor. The potential of machine learning to enable such effective large-scale surveillance has reduced the price tag of authoritarianism, and poses a novel threat to free and open societies.
The Atlantic recently dedicated an entire issue to the question “Is democracy dying?” In his piece, Jeffrey Goldberg writes:
We find ourselves in the middle of a vast, unregulated, and insufficiently examined experiment to determine whether liberal democracy will be able to survive social media, biotechnology, and artificial intelligence.
The 2017 Pew report on the future of online free speech, which introduced me to the term ‘weaponized narrative,’ is also well worth reading. One remark that jumped out in particular, if only because it resonated with so much else that I’ve been reading recently, is this observation by John Anderson, director of journalism and media studies at Brooklyn College:
The continuing diminution of what Cass Sunstein once called ‘general-interest intermediaries’ such as newspapers, network television, etc., means we have reached a point in our society where wildly different versions of ‘reality’ can be chosen and customized by people to fit their existing ideological and other biases. In such an environment there is little hope for collaborative dialogue and consensus.
Anne Applebaum, in a piece about the shift towards authoritarianism in Poland, also notes the role of technology in human manipulation. Today’s polarizing political movements do not require the 20th century’s “marching in formation, the torch-lit parades, the terror police,” she writes. Instead, people are simply encouraged to engage with an alternative reality. “Sometimes that alternative reality has developed organically; more often, it’s been carefully formulated, with the help of modern marketing techniques, audience segmentation, and social-media campaigns.”
The IEEE guidelines include recommendations in the areas of affective computing (issues related to emotions and emotion-like control) and virtual and mixed realities. As I was reading, though, I realized that these recommendations speak not just to isolated game environments or personal assistants that ‘nudge’ us toward ‘healthy choices,’ but to our ‘real’ world as well: the world in which we may not be aware of being surveilled and manipulated, in which companies like Facebook and Twitter shape the content we see via algorithms, in which bots spread misinformation in an attempt to persuade us or nudge us in a direction we might not otherwise go, and in which 90 percent of us click one of the top two results returned by a search algorithm like Google’s. If we are evolving into digitally networked beings, the world, too, is evolving to accommodate our awkward new form.
In a Nature piece titled “There is a Blind Spot in AI Research,” Crawford and Calo write, “Autonomous systems are already deployed in our most crucial social institutions, from hospitals to courtrooms. Yet there are no agreed methods to assess the sustained effects of such applications on human populations.”
In a section on well-being and A/IS technologies designed to replicate human tasks, behavior, or emotion, the IEEE guidelines note:
A/IS with sophisticated manipulation technologies (so-called “big nudging”) will be able to guide individuals through entire courses of action, whether it be a complex work process, consumption of free content, or political persuasion.
The Candidate Recommendation in this section that struck me most, if only because it seemed such an early step in the progress of our thinking, was the need to “determine the status of academic research on the issue of A/IS impacts on human well-being” and to aggregate that research “in a centralized repository for the A/IS community.”
Centralized repositories have been on my mind for other reasons as well. In a recent piece for The Atlantic, Yuval Noah Harari makes the point that democracy and dictatorship are different data-processing systems. “Democracy distributes the power to process information and make decisions among many people and institutions, whereas dictatorship concentrates information and power in one place,” he writes. The thought made me uncomfortable because I realized how much digital data we have generated and how little control we have over how it is used. I thought of China’s citizen scores. Here is an overview from Darlene Storm’s story in Computerworld:
The new “social credit system” is linked to 1.3 billion Chinese citizens’ national ID cards, scoring them on their behavior and the “activities of friends in your social graph — the people you identify as friends on social media.” Citizens’ credit scores, or “Citizen Scores,” are affected by their own political opinions and the political opinions of their friends as well. The system leverages “all the tools of the information age — electronic purchasing data, social networks, algorithmic sorting — to construct the ultimate tool of social control,” according to Jay Stanley, Senior Policy Analyst for the ACLU Speech, Privacy & Technology Project.
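The mechanics of the actual system are not spelled out in these reports, so the sketch below is purely hypothetical: a toy illustration of the one property the article does describe, that a person’s score depends not only on their own behavior but on the people in their social graph. The names, numbers, and weighting are all invented.

```python
# Purely hypothetical sketch of a score that depends on one's social graph.
# Nothing here reflects how the actual "social credit" system is implemented;
# people, scores, and weights are invented for illustration only.

OWN_WEIGHT, FRIEND_WEIGHT = 0.7, 0.3

behavior_score = {"ada": 80.0, "bo": 40.0, "chen": 90.0}
friends = {"ada": ["bo"], "bo": ["ada", "chen"], "chen": ["bo"]}

def citizen_score(person: str) -> float:
    """Blend a person's own score with the average score of their friends."""
    own = behavior_score[person]
    friend_avg = sum(behavior_score[f] for f in friends[person]) / len(friends[person])
    return OWN_WEIGHT * own + FRIEND_WEIGHT * friend_avg

for name in behavior_score:
    print(name, round(citizen_score(name), 1))
# ada's score is dragged down by bo; bo's is lifted by ada and chen.
# The only point: one's standing becomes a function of one's network.
```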
“According to recent reports, every Chinese citizen will receive a so-called ‘Citizen Score,’ which will determine under what conditions they may get loans, jobs, or travel visas to other countries,” reads a story in Scientific American. In the same article, the authors also note that “Elon Musk from Tesla Motors, Bill Gates from Microsoft and Apple co-founder Steve Wozniak, are warning that super-intelligence is a serious danger for humanity, possibly even more dangerous than nuclear weapons.”