THE GOOD LIFE
The fifth question asks you to consider: what is your personal reflection on professional ethics in relation to your goals as a media professional, media maker, and/or a professional in another field who will remain a media consumer? Focus your essay on a single virtue that is related to a personal and/or professional goal that is important to you and central to your idea of what it means for you to live “the good life.” Include a consideration of how dimensions of social ethics (how society is structured in relation to racism, as well as sexism, homophobia, etc.) come into play in your understanding of the good life. Your chosen virtue should be broad and inspiring to you, such as “be courageous” or “be fair”; include a discussion of how you envision this theme playing out in relation to two more focused goals, such as: contribute to a just workplace, participate in demanding increased accountability from those in power, work to challenge systemic racism, contribute to a more just media system, challenge sexual harassment and gender-based discrimination, practice empathy and compassion in situations of frustration, develop a mindset of generosity, be a great listener and advocate, engage in conflict resolution, learn from your experiences, learn from those whose lived experiences differ from yours, put others’ needs before your own, and hold yourself accountable for your actions and engage in reparative actions when you have wronged others. Cite at least three sources from the class materials. The goal of this essay is to demonstrate your fluency with the concepts of ethical reasoning and your ability to synthesize materials from the class into reflections on your own life goals and aspirations. Allow 1 hour to review your materials and at least 1 hour to write the essay. 300–400 words.
Read this for a model of what this essay can include.
Rubric:
Selection of your virtue of focus
Definition of that virtue
How you hope that this virtue will play out in at least two career-focused goals
Discussion of what it means to live “the good life” for you and how social ethics comes into play in this definition
Citations of three sources from class materials
Contents
Series title
Title page
Copyright page
In memoriam
Foreword by Luciano Floridi
Preface to the Third Edition
Notes
Acknowledgments
1 Central Issues in the Ethics of Digital Media
Chapter overview
Case-study: Amanda Todd and Anonymous
Introduction
(Ethical) life in the (post-)digital age?
1. Digital media, analogue media: convergence and ubiquity
2. Digital media and “greased information”
3. Digital media as communication media: fluidity, ubiquity, global scope, and selfhood/identity
Digital media ethics: How to proceed?
Is digital media ethics possible? Grounds for hope
How to do ethics in the new mediascape: Dialogical approaches, difference, and pluralism
Further considerations: Ethical judgments
Overview of the book, suggestions for use
Chapter arrangement, reading suggestions
Case-studies; discussion/reflection/writing/research questions
Notes
2 Privacy in the (Post-)Digital Era?
Chapter overview
Information and privacy in the global digital age
“Privacy” and anonymity online – is there any?
Interlude: Can we meaningfully talk about “culture?”
“Privacy” in the global metropolis: Initial considerations
You don’t have to be paranoid – but it helps …
If you’re not paranoid yet … terrorism and state surveillance
“Privacy” and private life: Changing attitudes in the age of social media and mobile devices
“Privacy” and private life: Cultural and philosophical considerations
“Privacy” and private life: First justifications, more cultural differences – transformations and (over-?)convergence
“Privacy” and private life: Cultural differences and ethical pluralism
Philosophical and sociological considerations: New selves, new “privacies?”
1. Culture?
2. The privacy paradox
Notes
3 Copying and Distributing via Digital Media: Copyright, Copyleft, Global Perspectives
Chapter overview
The ethics of copying: Is it theft, Open Source, or Confucian homage to the master?
1. Intellectual property: Three (Western) approaches
(a) Copyright in the United States and Europe
(b) Copyleft/FLOSS
FLOSS in practice: the Linux operating system
FLOSS in practice
2. Intellectual property and culture: Confucian ethics and African thought
Notes
4 Friendship, Death Online, Slow/Fair Technology, and Democracy
Chapter overview
Friendship online? Initial considerations
Friendship online: Additional considerations
Friendship – and death – online
Slow technology and the Fairphone
Case-study: Are you ethically obliged to purchase a Fairphone?
Digital media and democratization: First considerations
Democracy, technology, cultures
Notes
5 Still More Ethical Issues: Digital Sex, Sexbots, and Games
Chapter overview
Introduction: Is pornography* an ethical problem – and, if so, what kind(s)?
Pornography*: More ethical debates and analyses
Pornography* online: A utilitarian analysis
“Complete sex” – a feminist/phenomenological perspective
Sex with robots, anyone?
Now: What about games?
Sex and violence in games
Notes
6 Digital Media Ethics: Overview, Frameworks, Resources
Chapter overview
A synopsis of digital media ethics
Basic ethical frameworks
1. Utilitarianism
Strengths and limits
(a) How do we numerically evaluate the possible consequences of our acts?
(b) How far into the future must we consider?
(c) For whom are the consequences that we must consider?
2. Deontology
Difficulties …
3. Meta-ethical frameworks: Relativism, absolutism (monism), pluralism
Ethical relativism
Ethical absolutism (monism)
Beyond relativism and absolutism: Ethical pluralism
Strengths and limits of ethical pluralism
4. Feminist ethics
Applications to digital media ethics
5. Virtue ethics
Virtue ethics: sample applications to digital media
6. Confucian ethics
Confucian ethics and digital media: sample applications
7. African perspectives
Applications
Notes
References
Index
End User License Agreement
Series title
Digital Media and Society Series
Nancy Baym, Personal Connections in the Digital Age, 2nd edition
Mercedes Bunz and Graham Meikle, The Internet of Things
Jean Burgess and Joshua Green, YouTube, 2nd edition
Mark Deuze, Media Work
Andrew Dubber, Radio in the Digital Age
Quinn DuPont, Cryptocurrencies and Blockchains
Charles Ess, Digital Media Ethics, 3rd edition
Jordan Frith, Smartphones as Locative Media
Alexander Halavais, Search Engine Society, 2nd edition
Martin Hand, Ubiquitous Photography
Robert Hassan, The Information Society
Tim Jordan, Hacking
Graeme Kirkpatrick, Computer Games and the Social Imaginary
Tama Leaver, Tim Highfield, and Crystal Abidin, Instagram
Leah A. Lievrouw, Alternative and Activist New Media
Rich Ling and Jonathan Donner, Mobile Communication
Donald Matheson and Stuart Allan, Digital War Reporting
Dhiraj Murthy, Twitter, 2nd edition
Zizi A. Papacharissi, A Private Sphere: Democracy in a Digital Age
Jill Walker Rettberg, Blogging, 2nd edition
Patrik Wikström, The Music Industry, 3rd edition
Digital Media Ethics
Third Edition
CHARLES ESS
polity
Copyright page
Copyright © Charles Ess 2020
The right of Charles Ess to be identified as Author of this Work has been asserted in
accordance with the UK Copyright, Designs and Patents Act 1988.
First published in 2009 by Polity Press
This edition published in 2020 by Polity Press
Polity Press
65 Bridge Street
Cambridge CB2 1UR, UK
Polity Press
101 Station Landing
Suite 300
Medford, MA 02155, USA
All rights reserved. Except for the quotation of short passages for the purpose of criticism and
review, no part of this publication may be reproduced, stored in a retrieval system or
transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or
otherwise, without the prior permission of the publisher.
ISBN-13: 978-1-5095-3342-8
ISBN-13: 978-1-5095-3343-5 (pb)
A catalogue record for this book is available from the British Library.
Typeset in 10.25 on 13pt Scala
by Fakenham Prepress Solutions, Fakenham, Norfolk NR21 8NL
Printed and bound in Great Britain by TJ International Limited
The publisher has used its best endeavors to ensure that the URLs for external websites
referred to in this book are correct and active at the time of going to press. However, the
publisher has no responsibility for the websites and can make no guarantee that a site will
remain live or that the content is or will remain appropriate.
Every effort has been made to trace all copyright holders, but if any have been overlooked the
publisher will be pleased to include any necessary credits in any subsequent reprint or edition.
For further information on Polity, visit our website: politybooks.com
In memoriam
Barbara Becker (1955–2009): gifted and energetic philosopher,
among the earliest to conjoin phenomenology, embodiment, and
computational technologies in what proved to be prophetic and
prescient ways
Preston K. Covey, Jr. (1942–2006): pioneer in conjoining
philosophy and computation, including ethics, questions of
democracy, and educational computing, and co-founder of what is
now the International Association for Computing and Philosophy
(IACAP)
Henry Rosemont, Jr. (1934–2017): leading authority in Chinese
philosophy, tireless promoter of comparative philosophy and the
liberal arts, inspiring activist and most generous mentor
Brilliant colleagues, generous and patient teachers, good friends: their
spirits and guiding insights inform and inspire much of my life as well
as this book.
Foreword
Luciano Floridi
A common risk, run by many forewords, is to bother the reader by
repeating, sometimes less accurately, what the table of contents of the
book already specifies or (and unfortunately this is often an inclusive
or) by eulogizing the text and the author, plastering comments that
look like semantic clones lifted from a myriad of other texts. It is in
order to try to avoid both pitfalls that I shall skip here the usual hypes
– which the book and its author do deserve, make no mistake – in
order to speak to the reader a bit more frankly and hence, I hope, less
uninformatively.
Like the previous edition, this third edition has all the usual virtues of
a good textbook: it is carefully researched, clearly written, and argued
intelligently. Yet these are basic features that we have come to expect
from high-standard scholarship and do not make it special. That
Charles Ess has written a good textbook is uninteresting. That he
might have written an excellent (and now newly updated) one is what I
would like to argue. What the book offers, over and above its
competitors, are some remarkable and, to my knowledge, unique
features. Let me be schematic. The list is not exhaustive, nor do the
listed features appear in order of importance, but there is a good
narrative that keeps them together.
First, the topic. The book addresses the gray but crucial area of ethical
concerns raised by digital media. Of course, it is flanked on the shelf
by many other textbooks in information and computer ethics, data
ethics, AI ethics, and digital ethics (the terminology varies but topics
often overlap), even more so than when the second edition was
published, but, as Charles Ess well explains, this is not one of them,
and it sticks out for its originality. For the book tackles that messy area
of our ordinary lives where ethical issues are entangled with digital
mass media, communication artifacts, information technologies of all
sorts, computational processes, computer-mediated social
interactions, algorithms, and so forth. Indeed, it is one of its virtues
that it tries to clarify that “so forth” which I have just somewhat
surreptitiously added in order to spare myself the embarrassment of a
lack of a clear definition. As Schrödinger once said in a different
context, this is a very sharp picture of a rather fuzzy subject.
Second, the approach. The book has all the required philosophical
rigor, but, once again, this is not its most impressive feature. It is also
graced by a light touch, which means that Ess has avoided being either
prescriptive or proscriptive (you will not be told what to do and what
not to do), opting in favor of an enlightened (liberal, in his own
words), critical description of the problems discussed. This is a
noteworthy advantage, since the author empowers the reader, as
should be (but often is not) the case with similar texts. Having said all
this, the feature that I find unique and outstanding (in the literal sense
that it makes this book stand out on the ideal shelf of other
comparable books) is its capacity to combine a pluralistic approach –
without the bitter aftertaste of some crypto-relativism – with a well-
informed and timely look into non-Western views on the ethical issues
it tackles. This is crucial. Following a remarkable tradition of German
philosophers (Nietzsche, Schopenhauer, Hegel), Ess makes a
sustained and successful effort to bring together Eastern and Western
ethical traditions in an enriching and fascinating synthesis. And he
achieves all this thanks to his extended, international experiences with
a variety of cultures. If you wish to see how masterfully he avoids
syncretism, relativism, and dogmatism and succeeds in shaping an
overview of the field which is both captivating and ethically robust,
you need to read the book. This was already a great feature of the
second edition – it is now quite essential given the importance of
China’s role in the development of digital technologies and solutions.
Third, the style. This is a reader-friendly book that teaches without
patronizing, with a didactic style that can only be the result of decades
of care and experience in guiding students and readers through
difficult topics. Its degree of accessibility is as misleading as the ability
of an acrobat to make her performance look effortless. The third
edition just got even friendlier.
Many things are like pornography: it is very difficult to define them,
but you recognize them immediately when you see them. Digital media
are not an exception. Because we all know what digital media are, even
if it is hard to determine the exact boundaries of their nature,
applications, evolutions, and effects on our lives, I am confident that
the reader will understand why I would recommend this book not only
inside but also outside the classroom. Given its topic, its approach,
and its style, this is a book for the educated public as well. It should be
read by anyone interested in the development and future of the
information society and our moral lives within it.
Preface to the Third Edition
No one was more surprised – and then gratified beyond measure – than
I by the successes of the first edition of this little book. And then came
suggestions that a second edition might be in order – and then a third:
well, what are surprise and immeasurable gratification squared and
then cubed?
Many good comments from colleagues and students who have used
the book indicate that “success” here means first of all pedagogical
success. The book is designed precisely as a classroom text for use
across a wide range of academic disciplines. My intention is that it
should be accessible and useful for “the rest of us” – all of us who are
neither technology professionals nor philosophically trained ethicists.
The guiding assumption here (from Aristotle, along with many other
global traditions) is that we are already ethical beings, already
equipped with experience and capacities in ethical judgment
(phronēsis). The aim is to provide a basic ethical toolkit for better
coming to grips with the many ethical challenges that confront us all
as consumers and citizens, even designers of a digital media lifeworld.1
The broad strategy conjoins primary ethical frameworks and theory
with specific ethical experiences in our digital existence – increasingly,
as several examples argue, our post-digital existence.2 And lots of
practice by way of the “Reflection/discussion/writing questions”
designed to provoke and guide reflection and discussion that apply the
ethical insights and theories to central examples. On a good day,
students and readers will thereby become more adept in using these
ethical tools to more confidently and successfully take on newer
challenges most certainly to come.
These structures and approaches apparently work – hence (again) a
new edition. But to state the painfully obvious: things change fast in
our technological world. This was certainly true for the three years
between the first (2009) and second editions (2012): it is all the more
the case for the subsequent six or so years. Quantitatively: ever more
people in the world are connecting to the internet, increasingly via
mobile devices. Along the way, the past six years have witnessed the
increasing roles of Big Data and Artificial Intelligence (AI), and an
emerging Internet of Things (IoT), along with social robots and
sexbots. Qualitatively: the optimism driving much of the development
and visions of “the internet” from the early 1990s onward appears to
have peaked around 2012 following the first-blush successes of the
2011 Arab Springs. Early enthusiasm surrounding these so-called
“Twitter Revolutions” or “Facebook Revolutions” was soon tempered
by the harsh realities of the Arab Winters of 2013 and thereafter. With
the one shining exception of Tunisia, these democratization
movements were brutally crushed, in part as regimes learned how to
censor and manipulate social media. These regimes further transformed these
technologies into infrastructures of total state surveillance – including
in ostensibly more democratic societies, as Edward Snowden’s
revelations of the US National Security Agency’s surveillance
programs documented.
Reasons for pessimism have continued to pile up. They include the
Cambridge Analytica scandals and the resulting manipulations of the
2016 US elections and Brexit via fake news and filter bubbles, and the
polar choice between US-based “surveillance capitalism” (Zuboff
2019) and the emerging Chinese Social Credit System (SCS). While
rooted in diametrically opposite ideologies, both treat us as Skinner
rats in a Skinner cage: our behavior is closely monitored and
thoroughly controlled through exquisitely refined systems of reward
and punishment. Worse still: the SCS is increasingly exported and
adopted by other regimes, fueling the dramatic rise of “digital
authoritarianism” globally (Shahbaz 2018).
Fortunately, there remain middle grounds and bright spots. The
European Union is expanding individual privacy rights via the new
General Data Protection Regulation (GDPR 2016). The EU is likewise
developing robust ethical guidelines for an emerging “AI for people”
(Floridi et al. 2018). France and Germany are now confronting Google
and Facebook with significant fines and anti-trust accusations,
respectively (Romm 2019; Spencer 2019). Even the otherwise
business-friendly US is moving to fine Facebook some US$5 billion for
privacy violations (Kang 2019). Moreover, more and more people are
looking beyond “the digital” for a better balance between their online
and offline lives – discussed here with the concept of a “post-digital
era.” Six years ago, “digital detox” and “mindfulness” were the
vocabulary of a few who were dismissed as cranks and Luddites: now
these are increasingly central themes among even the most techno-
enthusiastic (Roose 2019; Syvertsen and Enli 2019).
These extensive, in some ways epochal, changes have demanded major
revisions and updates in every chapter. This has meant “killing my
darlings” – many darlings. Dozens and dozens of important references
in the literatures, along with several case studies and pedagogical
exercises, have been dropped in favor of newer material throughout –
beginning with chapter 2 on privacy, as increasingly threatened by
many of these more recent developments. The reference list is now c.
30 percent larger than its predecessor, and new topics have been
added, such as “death online” in chapter 4 and sexbots in chapter 5,
along with discussion of #Gamergate and more recent empirical
evidence regarding the harms and benefits of violent and sexually
explicit materials in games.
Virtue ethics has become even more central, including its increasing
role in design of Information and Communication Technologies (ICTs)
and in EU policy development regarding AI. Affiliated developments
in “ethical design,” including “slow technology” and the Fairphone as a
case study, are added in chapter 4.
Of course, all of this will change – certainly dramatically, perhaps well
before this book is printed. At the same time, as the ongoing
applicability of these ethical frameworks and the success of this book’s
approach attest, in some ways it is also true that plus ça change, plus
c’est la même chose – the more things change, the more they remain
the same. Hence my cautious optimism and hope that, as a teaching
framework and introduction, this edition will continue to assist
students, instructors, and general readers in gaining an overview of
central ethical issues occasioned by (post-)digital media – and
enhance our ethical insights and abilities (most centrally, our capacity
for phronēsis) in ways that will help us all come to better ethical grips
with these unfolding challenges in our daily lives.
Notes
1 Roughly: the whole complex of our lives as meaning-making and
relational beings, thoroughly informed by our co-evolving
technologies (Verbeek 2017; cf. Coeckelbergh 2017).
2 To use Karl Jaspers’s concept, our existenz – as centering on
experiences of frailty, suffering, and loss, including death ([Jaspers
1932] 1970: 185, cited in Lagerkvist and Anderson 2017: 554f.). We
do all we can to avoid confronting these experiences (e.g., by
“amusing ourselves to death” [Postman 1985]); but contemporary
existential philosophers such as Amanda Lagerkvist show how our
digitally mediated experiences of existenz are essential to our fully
realizing our freedom to discern and/or create meaning for our
existence (Lagerkvist 2018; cf. Vallor 2016b: 247). Cf. Ess (2018a,
2019).
Acknowledgments
As with the previous two editions, there are simply far more people to
thank than space allows.
First of all, a thousand thanks and more to my students and colleagues
at the Department of Media and Communication, University of Oslo,
beginning with Department Heads Espen Ytreberg and then Tanja
Storsul. They, along with numerous colleagues, administrative staff,
and students, made for a very soft landing in Oslo in 2012: and in the
subsequent seven years, all of these people cultivated a collegial
environment par excellence. I am particularly grateful to Knut Lundby
for his support and mentorship, especially in the domains of
mediatization and Digital Religion.
Insofar as this book is good for students, this is due precisely to
innumerable students over the past four decades of my teaching
career. I remain deeply grateful for their contributions, beginning with
their forcing me to be as clear as possible about often complex matters.
Many have specifically commented on and critiqued early versions of
the pedagogical elements of the book. Especially my Master’s students
in our Department have been rich discussion partners and sources of
insight.
Many wise and insightful colleagues have likewise helped shape and
fill this volume. I’m especially grateful to Shannon Vallor, whose
extensive work in virtue ethics now stands as primary source and
reference. As discussed here, virtue ethics has enjoyed a remarkable
renaissance over the past decade or so – so much so as to become
central (along with deontology) to EU-level and global efforts by the
IEEE to set the ethical standards for the design of AI and the Internet
of Things. It is impossible to overstate the significance of this – for all
of us. But it has not always been so: from my perspective, no one has
done more to articulate, develop, defend, and extend virtue ethics in
these ways than Shannon. All of us owe her very great thanks indeed.
Many other colleagues, too numerous to name, have contributed via
the conferences where many of these ideas and arguments were first
introduced and worked through. These include AoIR (the Association
of Internet Researchers), IACAP (the International Association for
Computing and Philosophy), ETHICOMP (Ethics and Computing),
CEPE (Computer Ethics: Professional Enquiries), and the Robo-
philosophy conferences. The 400+ researchers and scholars who
constituted the CaTaC (Cultural Attitudes towards Technology and
Communication) conference series (1998–2016) have been centrally
helpful for better understanding how our ethical sensibilities interact
with culturally variable factors, beginning with our conception of self.
For this volume, Soraj Hongladarom’s work (Chulalongkorn
University, Bangkok) has been especially significant: our now 20+
years of philosophical and intercultural dialogues continue to be most
enjoyable and fruitful. Maja van der Velden (Institute for Informatics,
University of Oslo) is likewise due very great thanks indeed for her
multiple contributions, several of which are incorporated here.
The list goes on. Rich Ling (Nanyang Technological University,
Singapore) offered invaluable insight into the profound and multiple
impacts of mobile devices, and thereby their ethical dimensions. Mia
Consalvo (Concordia University, Montréal, Canada) remains most
helpful concerning games and gaming. Susanna Paasonen and Kai
Kimppa (University of Turku, Finland) and J. Tuomas Harviainen
(University of Tampere, Finland) were especially generous sources of
insight and resources regarding pornography. Several AoIR list
members provided cross-cultural help on contemporary usages of CDs
and DVDs as media: Dan Burk, Danielle Couch, Aram Sinnreich, Deen
Freelon, Michael Glassman, Sam Phiri, David Banks, and Jakob
Jünger.
I am equally grateful to my Polity editors Ellen MacDonald-Kramer
and Mary Savigar, whose encouragement, support, and discipline were
essential. Two anonymous reviewers were helpfully critical in turn, for
which I am most grateful indeed.
My family continues to play the most important roles. Brother Robert
provided most helpful technical insight as well as fundamental
corporate perspectives. Sister Dianne Kaufmann remains constantly
supportive and encouraging. My wife, the Reverend Conni Ess, wisely
and consistently calls me out to the beneficent worlds of art, music,
food, and hiking: both I and this book are less nerdy as a result. Our
son Joshua has provided vital insight into both arcane technical details
and the contemporary digital and post-digital practices among
younger folk. Our daughter Kathleen, pursuing classics and religious
studies scholarship and translation, provided invaluable assistance
with both Greek philosophy and English style.
The deepest gratitude remains with my parents, Bob and Betty Ess.
They have now passed on beyond us. Like any mother, she was always
pleased with and proud of her children’s accomplishments – especially
those that sought to be of use to others. She was especially happy to
see me working on the first edition of this volume. In many ways, she
was also the person primarily responsible for my pursuing philosophy:
she loved discussing ideas and current events from a variety of
perspectives – a practice hence deeply interwoven in our lives. My
father provided unfailing care and encouragement, including the most
exemplary kind – namely, supporting my ethical and political choices
even when they differed sharply from his own. My parents’ examples
and practices thus remain the foundations of the core values
motivating this book – beginning with keen interest in different
approaches and views, and the spirit of enacting deep care for others.
Insofar as this volume reflects and helps foster such virtues – Mom,
Dad: this is for you.
CHAPTER ONE
Central Issues in the Ethics of Digital Media
Morally as well as physically, there is only one world, and we all have
to live in it.
(Midgley [1981] 1996, 119)
Chapter overview
We open with a classic case-study of cyberbullying that introduces
representative ethical issues evoked by digital media. This case-study
is accompanied by one of the primary pedagogical/teaching elements
of the book – questions designed to foster initial reflection and
discussion (for individuals, small groups, or a class at large), followed
by additional questions that can be used for further reflection and
writing.
After an introduction to the main body of the chapter, the section
“(Ethical) life in the (post-)digital age?” provides a first overview of
digital media and their ethical dimensions. I also highlight how more
popular treatments of these issues can become counterproductive to
clear and careful ethical reflection. We turn next to some of the
distinctive characteristics of digital media – convergence, digital
information as “greased,” and digital media as communication
technologies – that occasion specific ethical issues treated in this
volume. We then take up initial considerations on how to “do” ethics
in the age of digital media. Finally, I describe the pedagogical features
of the book and provide some suggestions for how it is designed to be
used – including specific suggestions for the order in which the
chapters may be read.
Case-study: Amanda Todd and Anonymous
When Amanda Todd was 12 years old and “fooling around” with
friends, including someone looking on via a webcam, that someone
asked Amanda to show him her breasts. She lifted her top: the result
was a video and pictures that began circulating on the internet –
distributed in part as her stalker would develop a new Facebook
profile when Amanda moved to a new school. Once friended with
Amanda’s new friends, the stalker would distribute the video and
photos again, as well as send them to teachers and parents. One of the
consequences of the online stalking was offline bullying – not unusual
for young adolescents, but now laced with taunts of “porn star”
(Bleaney 2012). At one point, Amanda made her first suicide attempt:
part of the online response included a series of “jokes” facilitated by
Tumblr.
Her stalker did not go away, and Amanda’s responses became more
and more desperate. In September 2012, she posted a video on YouTube
that described her experience (www.youtube.com/watch?v=KRxfTyNa24A).
On October 10, Amanda, now 15 years old, committed suicide. Her death
– including her video – attracted significant attention: by February
2013, it had logged over 4 million
views, and has now been seen by tens of millions. Alongside the initial
official investigations, the group Anonymous claimed to have
identified her stalker and published his name and address: not
surprisingly, he received death threats. Meanwhile, “Amanda Todd
jokes” – and, presumably, the original pictures and video – continue
to circulate online (Warren and Keneally 2012).
FIRST REFLECTION/DISCUSSION/WRITING QUESTIONS
Amanda Todd’s experience of cyberbullying has become a classic case
and example in digital media ethics, in part because of the multiple
issues and responses it entails. In addition to cyberbullying, we will
explore the privacy issues it raises in chapter 2. We will also take a
look at two additional topics evoked here – namely, the risks of “moral
panics” in media reporting on such events, and new forms of “vigilante
justice” facilitated by internet-connected digital media.
1. Given your experiences – and those of your friends and family –
how do you react to Amanda Todd’s suicide after some three years of
cyberbullying? For example, does it seem to you that this is indeed a
serious problem for those of us living in “a digital age” – i.e., as
immersed in a world of digital media more or less seamlessly
interconnected and interwoven with our offline lives? Remember here
that part of Amanda’s difficulty was that, while she could – and did –
physically move and change schools, her stalker was always able to
find her again easily through her online profile and activities.
(A) Insofar as you agree that such cyberstalking is problematic – make
a first effort at identifying more precisely just what’s wrong here. Of
course, there is a wide range of ethical points you can make –
beginning with the exploitation (including sexual exploitation) of
vulnerable persons (certainly including young girls, but plenty of
young boys get bullied as well) by more powerful ones. Moreover, it
seems clear that, if Amanda deserved privacy and anonymity – as we
will see, argued by deontologists as basic rights of persons – she was
not able to have such rights in her online environments. As a last
suggestion, what about the ongoing taunts and “jokes” that circulated
– and still circulate – in connection with Amanda’s video and suicide:
are these sorts of responses ethically problematic, in your view,
and/or, as a utilitarian might argue, simply the price to be paid for free
speech online?
(B) Whatever your responses to “(A),” now go back and do your best to
provide whatever reasons, grounds, feelings, and/or other sorts of
claims and evidence that you can offer at this stage to support these
first points.
2. A common phenomenon in reporting on new technologies in “the
media” is that of a “moral panic” (Drotner 1999). That is, stories are
often developed around sensational – and so very often the sexual –
but risky possibilities of a new technology. Sometimes a panic ensues
– e.g., cries for new efforts somehow to regulate or otherwise restrain
clearly undesirable behaviors and consequences. Such panics are not
always misplaced: they can sometimes inspire responses and changes
that may effectively improve our social and ethical lives. But for us, the
difficulty is that such a “moral panic” reporting style has us frame (if
we don’t think about it too much) new technologies and their
possibilities in an “either/or” dilemma: we are caught between having
to reject new technologies – e.g., as they lead, in this case, to the
stalking and suicide of a young girl – or defending these technologies
wholesale (as, for example, the US National Rifle Association finds
itself compelled to do in the wake of every new school shooting: Pane
2018).
Reflect on some of the examples of media coverage given here, as well
as others that you can easily find on your own, perhaps with the help
of the Wikipedia article on Amanda Todd
(https://en.wikipedia.org/wiki/Suicide_of_Amanda_Todd). Compare
these more popularly oriented accounts with more empirical research
on cyberbullying, e.g.:
Sonia Livingstone, Lucyna Kirwil, Christina Ponte, and Elisabeth
Staksrud (2014). In their own words: What bothers children online?
European Journal of Communication, 29(3), 271–88. DOI:
10.1177/0267323114521045.
Global Kids Online (2018).
The Pew Research Center. www.pewinternet.org/2018/09/27/a-majority-of-teens-have-experienced-some-form-of-cyberbullying.
Given the realities of young people’s experiences online (which, be
sure to notice, vary considerably from country to country), does it
seem to you that more popular coverage provides a much needed and
useful service in calling our attention to the sorts of social and ethical
problems that new media make possible? And/or: do you see any risks
here of such coverage falling into a “moral panic” style of reporting?
Either way, the key point is to provide evidence – including examples
(carefully cited, please) that support your claims and observations.
3. Especially in the face of what seems to be (a) the clear injustice of
stalkers and pedophiles using internet-connected digital media and
the sorts of anonymity afforded in online communication, including
popular social network sites (SNSs), to harass young people to the
point of suicide, vis-à-vis (b) at least the initial inability of “traditional”
law-enforcement agencies to identify and track down such
perpetrators, it is tempting to applaud the efforts of Anonymous to do
what the authorities apparently can’t. But, in this instance, rather than
speeding up justice, the “trial by Internet” – beginning with the
“outing” of the alleged stalker online, followed by quick condemnation
– resulted in a second injustice. Despite their prodigious hacking
abilities, Anonymous apparently erred, and the wrong man was
targeted with death threats and other harassment (Warren and
Keneally 2012).
(A) How do you respond to this set of problems? That is, does it
sometimes seem justified for groups such as Anonymous to intervene
in such cases – i.e., when the legal authorities initially appeared to lack
the technical sophistication needed to track down stalkers such as the
one who pursued Amanda Todd? And/or: might the risks of such “trial
by Internet” – beginning with the erroneous accusation of the wrong
person – outweigh its possible benefits (such as – occasionally –
getting the right person when the authorities can’t)?
Again, the key point is to provide support for your claims and
observations, beginning with evidence (e.g., how often does a group
such as Anonymous succeed where others fail?) and arguments that
will hold up to critical scrutiny.
(B) In January 2014 (slightly over a year after her suicide in October
2012), Dutch police arrested Aydin Coban. Amanda Todd is alleged to
be but one of his more than 30 victims; following Coban’s trial and
conviction in the Netherlands on charges of internet fraud and
blackmail, he is to be extradited to Canada to face charges related to
the Amanda Todd case
(https://en.wikipedia.org/wiki/Suicide_of_Amanda_Todd).
How do these subsequent developments affect or change (if at all) your
initial reflections and arguments above on Anonymous and “trial by
internet?” For example, given these more promising outcomes through
the work of law enforcement authorities – when given enough time –
what happens to initial (more short-term?) arguments in favor of “trial
by internet?” Alternatively: what if, in the subsequent seven years
following Todd’s death, these authorities had in fact failed to come up
with a likely suspect and evidence to bring him or her to trial? The
larger point here is to begin to reflect on how far into the future our
ethical decision-making must stretch – e.g., in order to consider
possible consequences several years down the road that might affect
our current ethical decisions and judgments. (This is an important
consideration in the discussion of utilitarianism in chapter 6.)
Introduction
Most certainly, in the industrialized world, our lives are inextricably
interwoven with what are sometimes called “New Media” or digital
media. Current generations are sometimes referred to as “digital
natives,” indicating that they have been born into and grown up in a
world saturated with these technologies. More broadly, an influential
European Commission “Digital Futures” project used in its title the
term “Onlife,” as developed by information philosopher Luciano
Floridi (2015) to highlight how the once distinct domains of “life
online” and “life offline” are now (more or less) seamlessly interwoven
in an “Onlife.” At the same time, contemporary media coverage of
digital media frequently highlights important, often frightening,
ethical issues these entanglements entail. Beyond our opening
examples of cyberbullying and “trial by Internet,” it is easy to find
stories highlighting how violence in games appears to lead to horrific,
real-world violence, ranging from school shootings to the July 22,
2011, killings in Norway of 77 people, including 69 young people on the island of
Utøya (Daily Mail Reporter 2012). Similarly, the long-standing debate
over whether pornography consumption results in increased sexual
aggression, especially toward women and girls, continues (e.g. Wright,
Tokunaga, and Kraus 2016). More broadly, numerous episodes and
developments have forced attention to how our immersion in digital
media technologies renders us vulnerable to massive state and
corporate surveillance and manipulation. Think: Edward Snowden
and the US National Security Agency (Dahlberg 2017); Facebook’s
secret mood manipulation of nearly 700,000 users (Kramer,
Guillory, and Hancock 2014); foreign actors’ interference with
elections and campaigns, including sophisticated hacking attacks
along with “fake news” distributed along increasingly polarized “filter
bubbles” fostered by social media (Pariser 2011); and a range of
Facebook scandals, such as the discovery that Cambridge Analytica, a
data firm affiliated with Donald Trump’s election campaign, scraped
otherwise private data from some 87 million Facebook users for the
sake of targeting and manipulating voters (e.g. Confessore 2018). And
so on.
These are certainly critical ethical issues, ones that will only become
more complex and pressing as our digital environment continues to
expand and evolve – most obviously, through ever greater collection
and analysis of our personal data in so-called Big Data approaches that
will be fed ever more data about us in the emerging Internet of Things.
Of equal importance: consider the increasing development and usage
of AI technologies and algorithms – whether in the form of
recommender systems in our shopping and musical choices, or, more
darkly, in increasing use of so-called pre-emptive policing systems that
use AI and Big Data collections to predict individual criminal acts
before they occur (Hildebrandt 2015, 191–9). Perhaps most ominously
– at least for those of us who still hold to ideals of individual freedom
and democratic norms and processes – such systems seem to drive
inevitably toward Western equivalents of the Chinese Social Credit
System (SCS). In its final form, SCS will use multiple technologies of
surveillance and data mining to “assess citizens, businesses and other
organizations in China with regard to their creditworthiness,
adherence to law, and compliance with the government’s ideological
framework” (Kostka 2018, 2). In light of such developments, it is no
exaggeration to worry that our digital technologies may result in
nothing less than “the end of law” as it has developed in modern
democracies – where such law rests upon and primarily defends
individual freedom and affiliated democratic norms (justice, fairness,
equality) and rights – including rights to privacy, freedom of
expression, and, most radically, rights to resist, contest, and disobey
(Hildebrandt 2015, 10).
We – meaning everyone who makes use of, and is dependent upon,
such digital technologies – are thereby confronted with a staggering
range of ethical issues. This is to say: these issues – whether
cyberbullying and pornography or foundational threats to privacy and
democracy – present us with possible conflicts with our basic ethical
norms, values, or principles; they thereby urge us to consider one or
more alternative choices or routes of action in order to resolve the
conflict. Many of these issues require the insight and assistance of
professionals such as computer and data scientists, ICT designers, and
philosophers specialized in these matters: but life in a (post-)digital
era means that all of us are confronted with such issues, as inevitably
catalyzed by our technologies.
These (and more) are compelling and urgent issues. Here, however, we
can explore only a few, beginning with privacy (chapter 2). It is also
important to notice how these issues are not solely pressing ethical
concerns. In addition, some of the stories and accounts of these
(including some of the references included above) illustrate a tendency
in popular media to call our attention to such issues in the frame of a
“moral panic” (Drotner 1999). That is, in order to attract our attention,
such stories sometimes simplify and sensationalize (and, whenever
possible, highlight the sexual). They thereby appeal to a deep-seated
fear in modern Western societies that our new technologies are
somehow getting out of control. This fear has been thematic in the
modern West since E. T. A. Hoffmann’s Der Sandmann ([1816] 1967)
– an early story about a seductive robot – and Mary Shelley’s
Frankenstein ([1818] 1933). These stories and accounts highlight the
fear that such new technologies will corrupt our ethical and social
sensibilities.
These more popular approaches – in contrast with the far more
nuanced and careful reflections of ethicists, philosophers of
technology, and our colleagues in the relevant technical fields – appear
to influence how “the rest of us” think and feel about these issues as
they affect our own lives and existence. So it is important to first
examine how “moral panic” reporting both furthers and frustrates
careful ethical reflection on digital media. On the one hand, such
reporting usually succeeds in getting our attention – and is thereby
useful as it catalyzes more careful reflection on important ethical
issues. On the other hand, by highlighting the negative effects and
potentials of digital media, such reporting fosters a polarized way of
thinking – a framework of “technology good” (because it brings us
important benefits) vs. “technology bad” (because it threatens the
moral foundations of society, most especially the morality of young
people). As we will see, such simple either/or frameworks for
reflecting on important ethical issues are simply misleading. Rather –
and as most of us likely already know full well – whatever truths may
be discerned about the ethics of digital media are more complex and
often lie somewhere in the middle between these two extremes. But if
presented only with the simple choice between “technology good” and
“technology bad,” we may not look for further alternatives: hence, we
get needlessly stuck in trying to decide between two compelling
choices. Getting stuck this way short-circuits, that is, the more careful
and extensive reflection required if we are to move beyond such
either/or thinking.
So we begin by examining more carefully some of the important
characteristics of digital media, along with the specific sorts of ethical
issues that these characteristics often raise for us.
(Ethical) life in the (post-)digital age?
In keeping with their increasingly central importance in our lives,
“digital media” are the subject of an ever-growing range of analyses in
a number of disciplines (e.g., Couldry 2012; Davisson and Booth
2016). At the same time, there has been something of a popular turn in
our experiences with and sensibilities toward digital media in recent
years. Broadly, a largely optimistic assumption that new technologies
would make our lives better in many ways – whether as consumers
satisfied with the latest convenience of, say, a voice-activated digital
assistant or smart home, and/or as citizens in a world of increasing
individual and collective freedom, democracy, and prosperity – is
increasingly overshadowed by darker developments, such as the
Cambridge Analytica scandal (Solon 2017). At the same time, more
and more of us are becoming aware of how “our minds can be
hijacked” (Lewis 2017) – in part, as more and more “tech dissenters,”
including Justin Rosenstein, the coder who invented Facebook’s “like”
button, have become increasingly and publicly critical of the very
technologies they themselves have built.
Lastly, since as early as 2000 (Cascone), an increasing number of
scholars and researchers argue that we are now living in a post-digital
era (e.g., Berry 2014; Lindgren 2017; Ess 2019). Some obvious
markers of this era are the increasing popularity of primarily analogue
technologies, including analogue film, vinyl records, and rising
interest in board games (Birkner 2017); we will explore additional
examples, such as “slow technology” and “digital detox” (chapter 4).
To be clear: “post-digital” does not mean “anti-digital.” It signals,
rather, a broader shift from an exclusive focus on “the digital” – to the
exclusion of “the analogue” – to a more nuanced balance and
recognition of the roles and importance of each in our lives.
At the same time, digital media represent strong continuities with
earlier forms of analogue communication and information media: the
latter include printed books, journals, and newspapers, what we now
call “hardcopy” letters, and, for example, traditional forms of mass
media such as newspapers and “one-to-many” broadcast media such
as radio and TV. We will note and explore these continuities more fully
in our efforts to evaluate one of the larger ethical questions we will
confront – namely, do digital media present us with radically new
kinds of ethical problems that thereby require absolutely new ethical
approaches? Such questions are often driven by emphasizing instead
important differences between earlier media and digital media. Such
an emphasis, however, also drives the either/or approach underlying
much popular media reporting. In any event, these differences often
are part of why new ethical issues come up in conjunction with digital
media. Exploring these differences at the outset is hence a good
starting point.
Three especially relevant characteristics of digital media are: how
digital media foster convergence; digital information as “greased”;
and digital media as ubiquitous and global communication media.
1. Digital media, analogue media: convergence and
ubiquity
To begin with, digital media work by transforming extant information
(e.g., voices over a phone, texts written on a word-processor, pictures
of an impressive landscape, videos recorded and broadcast, etc.) into
the basic informational elements of electronic computers and
networks, using binary code (1s and 0s – bits on and off). By contrast,
analogue media, such as increasingly popular vinyl records, capture,
store, and make information accessible by producing specific material
artifacts that are like (analogous to) the original. Music recording
equipment, for example, begins with microphones that translate the
vibrations of an original sound into magnetically stored information,
corresponding to specific sound pitches and volumes; this is then
“written” onto a tape that passes by a recording head at a specific
speed. These analogues of an original sound are in turn transformed
into further analogues: they are mechanically carved onto the grooves
of a vinyl record in the form of bumps and valleys that correspond to
the high and low frequencies and volumes of the original sound. These
physical variations are then translated by a phonograph needle back
into electronic impulses that likewise mimic the original variations of a
sound. Finally, these impulses are turned into sound once more by an
amplifier and speaker(s) – again, as an analogue or copy of the
original that, ideally, is as close to the original as possible.
One of the reasons digital media are so attractive is that analogue
media, by contrast, always involve some loss of information across the
various processes of collecting, recording, and storing it. This means –
and this is particularly critical to the ethical discussions of copying –
that each analogue copy of an original is always less true to the
original; and the more copies that are made – e.g., a tape copy of a
record as a copy of a tape of an original performance – the less faithful
(and satisfying) the resulting copy will be. By contrast, once
information is transcribed into digital form, each copy of the digital
original will be (more or less) a perfect replica of the original. Copy an
MP3 version of your favorite song a thousand times and, if your
equipment is working properly, there will be no difference between the
first copy and the thousandth.
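To make this concrete: the following short Python sketch – a minimal
illustration of this point, not part of the text, with file names and
sizes chosen arbitrarily – chains a thousand generations of file copies
and uses cryptographic hashes to verify that the last copy is
byte-for-byte identical to the first.

import hashlib
import os
import shutil
import tempfile

def sha256_of(path):
    """Return the SHA-256 hex digest of a file's bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

workdir = tempfile.mkdtemp()

# A stand-in "original": 10 KB of arbitrary bytes in place of a real MP3.
original = os.path.join(workdir, "song.mp3")
with open(original, "wb") as f:
    f.write(os.urandom(10_000))

# Copy the copy of the copy, a thousand generations deep.
previous = original
for generation in range(1000):
    next_copy = os.path.join(workdir, "copy_%d.mp3" % generation)
    shutil.copyfile(previous, next_copy)
    previous = next_copy

# Every generation is bit-identical: digital copying loses nothing.
assert sha256_of(original) == sha256_of(previous)
print("The 1,000th-generation copy is a perfect replica of the original.")

An analogous chain of analogue copies – a tape of a tape of a record –
would fail such a comparison at the very first generation.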
Even more importantly, analogue media are strongly distinct systems:
how information is captured and replayed on a vinyl record is not
immediately compatible – and hence not easily exchangeable – with
how information is captured and replayed in a newspaper or printed
book. But once information is translated into digital form, such
information – whether destined for an MP3 player as an audio
recording or a word-processor as text – can be stored on and
transmitted through a shared medium. Hence the same computer or
smartphone can capture, create, process, and distribute digital photos
and music, along with a thousand other forms of information held
distinct in analogue media, from simple emails to word-processing
files to maps to … “you name it.”
To be sure, these distinctions between analogue and digital media are
only one side of the coin. As advocates of the post-digital remind us
(Cascone 2000; Berry 2014), however much our media technologies
have changed in recent decades, human eyes, ears, and voices have
not: we as embodied beings still generate and receive information in
resolutely analogue form. The digital codes, for example, that pass
between two computers or smartphones, whether in the form of a
Skype call, Facebook update, or phone call, begin and end for their
human users as analogue information. The emergence of “the digital,”
in short, does not mean the quick and complete end of “the analogue”
(cf. Massumi 2002). This is critical to keep in mind especially from an
ethical perspective: as digital media build on and enhance – rather
than replace – our analogue modes of communication and
experiences, they thereby call into play experiences and
communication that have been part and parcel of human ethical
reflection and frameworks for millennia. This is good news, ethically.
That is, it is sometimes argued – and tempting to think – that the
ethical experiences and challenges of digital media are so strikingly
new that they require entirely new frameworks (e.g., Braidotti 2006).
But these continuities with our experiences as analogue and embodied
beings argue that the emergence of digital media does not require us to
throw out all previous ethical reflections and views and somehow try
to start de novo – from the beginning. On the contrary, we will see
several examples of how older forms of ethical reflection (perhaps,
most notably, virtue ethics) – however transformed through their
applications within digital media – are often key in helping us analyze
and successfully resolve contemporary ethical dilemmas.
Nonetheless, as once-distinct forms of information are translated into
a commonly shared digital form, this establishes one of the most
important distinguishing characteristics of digital media – namely,
convergence (Jenkins 2006). Such convergence is literally on display
in a contemporary webpage containing text, video, and audio sources,
as well as possibilities for sending email, remotely posting a comment,
etc. These once-distinct forms of information and communication are
now conjoined in digital form, so that they can be transmitted entirely
in the form of 1s and 0s via the internet. Similarly, a contemporary
smartphone exemplifies such convergence: as a highly sophisticated
supercomputer, it easily handles digital information used for a built-in
camera (still and/or moving video), audio and video players, a web
browser, GPS navigation, and many other sorts of information. (Oh
yes, it will also make phone calls.)
Digital media thus conjoin both traditional and sometimes new sorts
of information sources. In particular, what were once distinct kinds of
information in the analogue world (e.g., photographs, texts, music)
now share the same basic form of information. What does this mean,
finally, for ethics? Here’s the key point: what were once distinct sets of
ethical issues now likewise converge – sometimes creating new
combinations of ethical challenges that we haven’t had to face before.
For example, societies have developed relatively stable codes and laws
for the issue of consent as to whether or not someone can be
photographed in public. (In the US, generally, one can photograph
people in public without asking for their consent, while, in Norway,
consent is required.) Transmitting that photo to a larger public – e.g.,
through a newspaper or a book – would then require a different
information system, and one whose ethical and legal dimensions are
addressed (however well or poorly) in copyright law. But, as many
people have experienced to their regret, a contemporary smartphone
can not only record their status and actions, but further (more or less
immediately) transmit the photographic record to a distribution
medium such as Snapchat or an even more public website (e.g., as in
revenge porn). The ethics of both consent in photography and
copyright in publication are now conjoined in relatively novel ways.
In fact, technological convergences toward the end of the nineteenth
century – specifically, the ability of newspapers to print photographs –
occasioned some of the foundational arguments for privacy in the
contemporary world. This innovation led to the demand for celebrity
photos – and thereby intrusions into the lives of the famous that
violated “the obvious bounds of propriety and of decency” (Warren
and Brandeis 1890, 195, cited in Glancy 1979, 8).)
2. Digital media and “greased information”
A second characteristic of digital media is that digital information is
“greased.” That is, as James Moor (1997) has observed, “When
information is computerized, it is greased to slide easily and quickly to
many ports of call” (27). As anyone who has hit the “post” button on a
status update too quickly knows all too well, information in digital
form can spread more or less instantaneously and globally, whether
we always want it to or not.
As the example of uploading embarrassing photos or videos from a
smartphone suggests, the near-instantaneous and potentially global
distribution of digital information raises especially serious ethical
issues surrounding privacy. Where it was once comparatively difficult
to capture and then transmit information about a person that she or
he might consider private, digital media, beginning with computer
databases that store and make easily accessible a vast range of
information about people, have resulted in an extensive spectrum of
new threats to personal and private information. Moreover, digital
information as “greased” likewise makes it easy to copy and distribute,
say, one’s favorite songs, movies, or texts. To be sure, it has always
been possible to copy and distribute copies of a given text, song, or
film. But the ease of doing so with digital media is a primary factor in
the central problems of copying, copyright, and so on.
3. Digital media as communication media: fluidity,
ubiquity, global scope, and selfhood/identity
The emergence of digital media – along with the internet and the Web
as ways of quickly transporting digitized information – thus gives rise
to strikingly new ways of communicating with one another at every
level. Emails, SNSs (Facebook, Twitter, Snapchat, etc.), photo and
video distribution sites (YouTube, etc.), and personal blogs provide
ways for people – especially in the developed world, but also
increasingly in developing countries – to enhance existing
relationships and develop new ones with persons often far removed
from their own geographical/cultural/linguistic communities.
Especially as the internet and the Web now connect over half of the
world’s population (Internet World Stats 2018), they thereby make
possible cross-cultural encounters online at a scope, speed, and scale
unimaginable even just a few decades ago.
Along these lines, two additional features of digital media become
crucial. To begin with, digital media enjoy what Phil Mullins (1996)
has characterized as a kind of fluidity: specifically, a biblical text in
digital form – either on one’s smartphone or as stored on a website –
becomes, in his phrase, “the fluid Word.” In contrast to a biblical text
as fixed in a strong way when inscribed on parchment (the Torah)
and/or printed on paper, a biblical text encoded on a flash memory or
server hard drive in the form of 1s and 0s can be changed quickly and
easily. This fluidity is highlighted by a second characteristic of digital
communication media – namely, interactivity. Both a printed Bible
and the daily newspaper are produced and distributed along the lines
of a “top-down” and “one-to-many” broadcast model. While readers
may have their own responses and ideas, they can (largely) do nothing
to change the printed texts they encounter. By contrast, I can change
the biblical text on my smartphone if I care to (e.g., if I think a
different translation of a specific word or phrase might be more
precise or illuminating) – and, by the same token, a community of
readers can easily amend and modify an online text; they might also
be able to post comments and respond to a given text in other ways
that are in turn “broadcast” back out to others. (Such matters, along
with many others evoked by digital media, are the foci of Digital
Religion, a now mature field of internet studies: Campbell 2017.) In
other words, digital communication media offer multiple new
possibilities of “talking back”: posting comments, or even a blog, in
response to a newspaper story, now reproduced online; voting for a
favorite in a TV-broadcast contest by way of SMS messaging;
organizing “smart mobs” via the internet and smartphones to protest
against – and, in some cases, successfully depose – corrupt politicians,
etc.
Secondly, the diffusion of internet and Web-based connectivity by way
of smartphones and other digital devices (e.g., the sensor devices a
jogger wears to track and record a run in exquisite detail, including
precise location, time, speed, etc.) makes increasingly real for us the
ubiquity of digital media. We are increasingly surrounded by an
envelope of interacting digital devices – meaning first of all that we are
“always on,” always connected (unless we take steps to go offline –
steps that are increasingly difficult to accomplish but also increasingly
recognized as important to our health and well-being in a post-digital
era, e.g. Roose 2019). The ubiquity of our interactive devices means
that we are increasingly both the subjects and the objects of what
Anders Albrechtslund (2008) early on identified as “voluntary
surveillance.” To be sure, such voluntary or lateral surveillance can
certainly be enjoyable, even life-saving – e.g., as we keep up with
distant friends and family through a posting on a social networking
site such as Facebook. At the same time, however, the mobile or
smartphones we carry with us into more or less every corner of our
lives – including the (once) most intimate spaces of the bathroom and
the bedroom – open up our lives in those spaces to new possibilities of
tracking and recording in exquisite detail.
On the one hand, social scientists (among others) can thereby use
smartphones as primary conduits into the lives of their informants and
subjects of study – often on a massive scale. Such research – especially
as enhanced through Big Data collection and AI-/algorithmic
techniques of analysis – has dramatically expanded our insights into
just about every facet of human behavior (for an overview, Ling 2017).
On the other hand, carrying these devices renders us immediately
vulnerable to governmental and corporate surveillance, various forms
of governmental and private actors’ hacking (e.g., the phone hacking
scandal in the UK – CNN 2018), parental efforts to track their children
(Gabriels 2016), partners’ ability to track one another’s sexual
activities and infidelities (Danaher, Nyholm, and Earp 2018) or to
engage in sexting as well as revenge porn, etc. In particular, as we will
explore more fully below, when such surveillance is not voluntary, our
online and offline lives risk becoming more and more like those in a
medieval village in which “everybody knows everything about
everybody.” As the phenomena of “trial by Internet” and cyberbullying
make clear, our increasing inability to hide or get away from those who
seek to do us harm in such a medieval village – including, worst case, a
self-righteous mob inspired by unproven allegations, for example, of
sexual assault – opens up a number of critical ethical (and political)
concerns (Jensen 2007).
Moreover, our personal data are being collected in ever increasing
amounts through the emerging “Internet of Things” (IoT) – e.g., in the
name of so-called Smart Cities which promise greater energy
efficiencies, better traffic flow, etc., through constant monitoring of
individuals and our devices (including, for example, our cars, our
electric meters, our smart assistants, and so on), coupled with a
growing web of cameras and sensors embedded in the environment
around us. It is not difficult to see that the IoT thereby presents still
more threats to individual and group privacy (e.g., Rouvroy 2008;
Bunz and Meikle 2018, 123–5) – especially as the IoT threatens to
easily morph into a total surveillance system, as exemplified in the
Chinese Social Credit System (SCS).
Thirdly, fluid and interactive digital media enjoy a global scope, which
leads to still more urgent ethical issues. Our communications can
quickly and easily reach very large numbers of people around the
globe: like it or not, our use of digital technologies thus makes us
cosmopolitans (citizens of the world) in striking new ways. We are
forced to take into account the various and often very diverse cultural
perspectives on the ethical issues that emerge in our use of digital
media. So I will stress throughout this book how the assumptions and
ethical norms of different cultures shape specific ways of reflecting on
such matters as privacy (chapter 2), copyright (chapter 3),
pornography, sexbots, and violence (chapter 5).
Finally, our engagements with digital media have consequences for
nothing less foundational than our most basic conceptions of selfhood
and identity – of who we are as human beings. To be sure, questions
such as “Who am I – really?” and “Who ought I to be?” are among the
most abstract and difficult ones we can ask as human beings. Indeed,
outside of an occasional philosophy class or, perhaps, a mid-life crisis,
we may rarely raise such questions with the sort of sustained attention
and informed reflection that they deserve and require. But there are
strong theoretical and urgently practical reasons for taking up such
questions here. To begin with, the Medium Theory developed by
Harold Innis, Elizabeth Eisenstein, Marshall McLuhan, Walter Ong
(1988), and Joshua Meyrowitz (1985), and, more recently, Naomi
Baron (2008) and Zsuzsanna Kondor (2009), demonstrates strong
correlations between our diverse modalities of communication and
our sense of selfhood. These correlations begin with the stage of
orality and what is characterized as a relational sense of selfhood:
such a self is made up of and thus dependent upon multiple
relationships – beginning with the family (as child, sibling, cousin,
etc.) and then the larger social relationships that define one. The
emergence of literacy appears to correlate with more individual
understandings of selfhood – so much so that Foucault has
characterized writing as a “technology of the self” (1987, 1988).
Emphases on individual aspects of identity further emerge in
conjunction with the printing press and the expansion of literacy-
print, initially via the Protestant Reformation, and then as underlying
both much of modern ethical theory and political theories justifying
democratic regimes. With the rise of the “secondary orality” of electric
media – beginning with radio, movies, and TV and then extending into
the age of networked digital media – there appears to be a shift in
Western societies (back) toward more relational emphases of selfhood
and identity (Ess 2010, 2012, 2017a). There are also important middle
grounds here – namely, conceptions of the self as a relational
autonomy that conjoin more individual emphases on freedom
(autonomy) and the realities of our relationships with one another:
relational autonomy is applied, for example, in recent critiques of so-
called Quantified Relationship (QR) apps (Martens and Brown 2018).
It is a commonplace in philosophy that our sense of human nature and
selfhood drives our primary ethical assumptions and frameworks. In
particular, we will begin exploring more fully below how questions of
identity immediately interact with our most basic assumptions
regarding ethical agency and responsibility. We will further see in our
ethical toolkit (chapter 6) that our emphases on either more individual
or more relational aspects of selfhood and identity are definitive for
(more individually oriented) utilitarian and deontological ethics, in
contrast with (more relationally oriented) virtue and feminist ethics
and the ethics shaped by Buddhist, Confucian, and African traditions,
for example. Like it or not, while questions of identity are, again,
among the most difficult we can raise and seek to resolve, our
responses to those questions are crucial if we are to make coherent
choices regarding the ethical frameworks we think best suited to help
us analyze and resolve the ethical challenges evoked by digital media.
Lastly, our assumptions regarding identity and selfhood have
immediate significance for how we begin to think about the nature of
privacy – specifically, if what we feel and think we need to protect is a
more individual and/or more shared or collective sense of privacy
(chapter 2). Similar questions hold for our understandings of who
should have – and should not have – access to our intellectual
property: i.e., whether we hold to more traditional (meaning, more
individual and exclusive) conceptions of property, so that we transfer
rights to its use to others only in exchange for monetary or other sorts
of considerations, or to more inclusive notions of property, e.g. as an
inclusive good to be shared freely, as we routinely do when giving
copies of our favorite music and films to friends, for example (chapter
3). By the same token, our underlying notions of selfhood and identity
will prove critical to our analyses of the issues surrounding friendship,
death online, and democracy (chapter 4) and those evoked by
pornography and violence in digital environments, including sexbots
(chapter 5).
Digital media ethics: How to proceed?
At first glance, developing such an ethics would seem to be an
impossible task. First of all, digital media often present us with
strikingly new sorts of interactions with one another. So it is not
always clear whether – and, if so, then how – ethical guidelines and
approaches already in place (and comparatively well established) for
traditional media would apply. But again, as emphasized in the term
“post-digital,” digital media remain analogue media in essential ways
– the music arriving at our ears remains analogue, etc. And so the
lifeworlds of human experience that digital media now increasingly
define remain connected with the analogue lifeworlds of earlier
generations and cultures: this means that there remain important
continuities with earlier ethical experience and reflection as well.
In addition, digital media as global media force us to confront
culturally variable views – regarding not simply basic ethical norms
and practices but, more fundamentally, how ethics is to be done. In
particular, we will see that non-Western views – represented in this
volume by Confucian, Buddhist, and African perspectives – challenge
traditional Western notions of the primary importance of the
individual, and thereby Western understandings of ethical
responsibility as primarily individual responsibility. That is, while we
in the West recognize that multiple factors can come into play in
influencing an individual’s decision – e.g., to tell the truth in the face
of strong pressures to lie, to violate another’s rights in some way, etc. –
we generally hold individuals responsible for their actions, as the
individual agent who both makes decisions and acts independently of
others. But, these days, our interactions with one another
predominantly take place via digital media and networks. This means,
more specifically, that multiple actors and agents – not only multiple
humans (including software designers as well as users) but also
multiple computers, networks, bots, etc. – must work together to make
specific acts (both beneficent and harmful) possible. Hence, in parallel
with the distribution of information via networks, our ethical
responsibility may be more accurately understood in terms of a
distributed responsibility (Simon 2015). That is, ethical responsibility
for our various actions via digital media and networks is “stretched”
across the network. This understanding of distributed responsibility is,
in fact, not an entirely new idea; rather, it is one shared with both pre-
modern Western philosophies and religions and multiple philosophies
and religions around the globe.
Certainly, this is a Very Good Thing: it suggests important ethical
norms and practices that can be shared among the multiple cultures
and peoples now brought into digital communication with one
another. But it represents a major challenge, especially, to Western
thinkers used to understanding ethical responsibility in primarily
individualistic terms.
Is digital media ethics possible? Grounds for
hope
These challenges are certainly daunting. Indeed, when we first begin
to grapple with digital media ethics, especially with a view toward
incorporating a range of global perspectives and changing notions of
selfhood and responsibility, the tasks before us may seem to be
overwhelming and perhaps simply futile. But both our collective
experience with earlier technological developments and more recent
experience in the domain of information and computing ethics (ICE)
suggest that, despite the considerable challenges of developing new
ethical frameworks for new technologies, we are nonetheless able to do
so. Indeed, this experience provides us with a number of examples of
ethical resolutions that “work” both globally (as they involve
discerning shared norms and understandings) and locally (as they
further involve developing ways of interpreting and applying shared
norms in specific cultural contexts – and thereby preserving the
distinctive ethical differences that define diverse cultural identities).
As a primary example: the European Union has drawn up and now
implemented more rigorous privacy protections than were defined
under previous data regulations (GDPR 2016; Berbers et al. 2018). In
2015, the European Data Protection Supervisor (EDPS) established an
Ethics Advisory Group, assigned to develop a “new digital ethics” to
help guide the specific implementations of the GDPR, including
sustaining the rigorous EU privacy protections vis-à-vis the emerging
Internet of Things and growing uses of Artificial Intelligence (AI). This
new digital ethics, however, turns squarely on two ethical frameworks
we have begun to explore here – namely, deontology (roughly, an
insistence on human autonomy and thereby basic rights, including the
right to privacy) and virtue ethics (briefly, a focus on achieving good
lives of flourishing through the development of our best capacities).
For example, the EDPS Ethics Advisory Group (EAG) foregrounds the
central importance of autonomy and freedom, including as these are
grounded in the philosophical work of Immanuel Kant (Burgess et al.
2018, 16). Similarly, both the EAG report and the more philosophical
account of the key ethical pillars of a “Good AI Society” (Floridi et al.
2018, 689f.) foreground the central aims of virtue ethics – namely,
flourishing, well-being, and good lives (Burgess et al. 2018, 21; Floridi
et al. 2018, 690f.). At the same time, these ethical frameworks are
(also) applied in a pluralistic fashion. So the EAG asserts that basic
norms and values – such as autonomy, dignity, equality, and so on –
are central “to the European project” (Burgess et al. 2018, 16).
Indeed, these are claimed to be universal – while recognizing that
“these values must be understood and implemented in the social,
cultural, political, economic and not least, technological contexts in
which the crucial link between personal data and personal experience
is made” (Burgess et al. 2018, 9).
Similar comments hold for the long-term experience of the Association
of Internet Researchers’ (AoIR’s) development of internet research
ethics guidelines since 2000 (Ess 2017b). Taken together, these
examples suggest that digital media ethics – as likewise requiring us to
address the ethical dimensions evoked by developing new
technologies, including how these implicate diverse cultural norms
and traditions – is nonetheless a doable project.
Moreover: extensive evidence argues that, with few exceptions, as
enculturated human beings, we are already deeply ethical (at least by
the time you are reading a book such as this). In Aristotelian terms,
you are already experienced with confronting ethical difficulties; you
are already equipped with important foundations and, most
importantly, phronēsis as a central skill of ethical judgment (more on
this below). Be of good courage!
How to do ethics in the new mediascape:
Dialogical approaches, difference, and
pluralism
These examples of the AoIR guidelines and recent EU law and ethics
further offer important suggestions for how to proceed – specifically,
as both examples share two elements in common. To begin with, they
each incorporate what we can think of as dialogical approaches –
approaches that emphasize the importance of listening for and
respecting differences between our diverse ethical views.
Ordinarily – especially if our thinking is shaped by a polarized
either/or common in popular media reporting – we tend to
understand the difference between two views in only one possible way:
if the two views are different, one must be right and the other wrong.
Again, as we will explore more carefully in chapter 6, such approaches
are called ethical absolutism or ethical monism. These may work well
in certain contexts and with regard to some ethical matters. But,
especially in a global context, a severe consequence of such ethical
monism is to force us into thinking that one – and only one –
particular ethical framework and set of norms and values (usually,
those of the culture[s] in which we grew up) are right, and those that
are different can only be wrong.
In the face of such monism and its intolerance of different views, we
are often tempted to take a second position – one called ethical
relativism. Ethical relativism argues that beliefs, norms, practices,
frameworks, etc., are legitimate solely in relation to a specific culture,
time, and place. In this way, ethical relativism allows us to avoid the
intolerance of ethical monism and to accept all views as legitimate.
Such an approach is especially attractive as it prevents us from having
to judge among diverse views and cultures: we can endorse all of them
as legitimate in at least a relative way (i.e., relative to a specific culture,
etc.).
But the examples of ethical pluralism in both internet research ethics
and EU law and ethics surrounding privacy and data privacy
protection show how such pluralism stands as a third possibility – one
that is something of a middle ground between absolutism and
relativism. That is, to begin with, such pluralism avoids the either/or
of ethical monism – an either/or that forces us to choose between two
different views, endorsing one as right and the other as wrong. Rather,
pluralism shows how different views may emerge as diverse
interpretations or applications of shared norms, beliefs, practices, etc.
To be sure, not all of our differences can be resolved so neatly; but,
when pluralism succeeds, the differences between two (or more) views
thus do not force us to accept only one view as right and all the others
as wrong. Rather, we can thereby see that many (but not necessarily
all) different views may be right, insofar as they function as diverse
interpretations and applications of shared norms and values.
In addition, ethical pluralism thereby overcomes a second either/or –
namely, the apparent polarity between ethical monism and ethical
relativism themselves. That is, when we first encounter these two
positions – and, once more, especially if our thinking has been shaped
by prevailing dualities in the thinking of those around us, including
popular media reports – our initial response may again be either/or:
either monism is right or relativism is right, but not both. In important
ways, ethical pluralism says that both are right – and both are wrong.
From a pluralist perspective, monism is correct in its presumption that
universally valid norms exist, but mistaken in its insistence that the
differences we observe between diverse cultures in terms of their
practices and behaviors must mean that only one is right and the rest
are wrong. Similarly, from a pluralist perspective, ethical relativism is
correct in its attempt to endorse a wide range of different cultural
norms and practices as legitimate, but mistaken, first of all, in its
denial of universally valid norms.
We will explore these theories of absolutism, relativism, and pluralism
in more detail in chapter 6. Here it suffices simply to introduce these
possibilities of thinking in an initial way to help us move beyond the
either/or thinking that tends to prevail in popular media – and
thereby, perhaps, our own thinking.
Given this first introduction, perhaps we can now see more clearly why
the either/or underlying many popular media reports – especially of
the moral panic variety – works against our best thinking. Ethical
pluralism requires us to think in a “both/and” sort of way, as it
conjoins both shared norms and their diverse interpretations and
applications in different cultures, times, and places. But if the only way
we are able to think about ethical matters is in terms of the either/or
of ethical monism, then we literally cannot conceive of how to move
beyond the right/wrong dualisms with which it often confronts us.
That is, we will find it difficult conceptually to move toward pluralism
and other forms of middle grounds, because our either/or thinking
insists that we can only have either unity (shared norms) or difference
(in interpretation/application), but not both.
Stated differently: in dialogical processes, we emphasize learning to
listen for and accept differences – rather than rejecting them from the
outset because different views must thereby be wrong (ethical
monism). But we also do not come to endorse all possible views as
correct (ethical relativism), because not every view can be understood
as a legitimate interpretation or application of a shared norm. Rather,
dialogical processes help us sort through, on the one hand, which
views may stand as diverse interpretations of shared norms in a
pluralism and, on the other, those views (e.g., endorsing genocide,
racism, violence against women as inferiors, etc.) that cannot be
justified as interpretations of shared norms.
Further considerations: Ethical judgments
Another difficulty with the “moral panics” approach to ethical issues in
the new mediascape is that it suggests that “ethics” works like this:
1. There are clear, universally valid norms of right and wrong that we
can take as our ethical starting points – as premises in an ethical
argument.1
2. All that “ethics” really involves is applying these initial premises to
the particulars of the current case in front of us – in a
straightforward deduction that concludes the right thing to do, as
based on our first premises.
3. Once we have our ethical answers in this way, we can be confident
that our answers are right; those who disagree with us must be
wrong.
This approach to ethics is not necessarily mistaken; on the contrary, it
seems that much of the time, most of us in fact do not perceive an
ethical problem or difficulty in the situation we’re facing – because our
ethical frameworks already provide us with reasonably clear and
straightforward answers along just these lines. Most of us, for
example, do not routinely lie, steal, or kill – despite what may
sometimes be considerable temptations to do so – because we accept the
general norms and principles that forbid such acts.
At the same time, however, this initial understanding of ethics
obscures a number of important dimensions of ethical reflection.
To begin with, this initial approach runs counter to what seems
actually to happen when we encounter genuine ethical problems and
puzzles. Take, for example, the problem of downloading music illegally
from the internet. We all know that this is illegal, but we are also
influenced in our thinking by other considerations, e.g.:
I’m not likely to get caught, so there’s virtually no possibility that this
will actually hurt me in some way.
The internationally famous musicians – and the multinational
companies that sell their music as product for profit – are certainly
wealthy enough. They won’t feel the loss of the 2 cents profit they
would otherwise enjoy if I paid for the music.
Copyright laws are unfair in principle: they are written for the
advantage of the big and already wealthy countries. Thus, I think
illegal downloading by a struggling student in a developing country is
a justified form of protest against multinational capitalism and its
exploitation of the poor.
Whatever the law says, the law is the law: I think it should be
respected so far as possible – not only in order to avoid punishment,
but in order thereby to contribute to good social order.
Even if the chances of getting caught are vanishingly small, if I do get
caught, the negative consequences would be enormous (fines, possibly
problems at work, maybe even jail time). It’s not worth breaking the
law to save a few bucks on music.
While internationally famous artists may not miss my contribution to
their royalties, local and/or new artists certainly will. I’ll not rip them
off by illegally copying their music – I’ll just order the song online or
buy the CD instead.
The point here is not only that we are often pulled in competing
directions by values and principles that appear to contradict one
another. In addition, the more fundamental problem is: given the
specific details of our particular situations, how do we know which
principle, value, norm, rule, etc., is in fact relevant to our decision?
That is, in direct contrast to the “top-down” deductive model of ethical
reasoning – i.e., one that moves from given general principles to the
specifics of our particular case – this second ethical experience begins
with the specifics of our particular case, in order then to try to
determine (“bottom-up”) which general principles, values, norms, etc.,
in fact apply.
This second maneuver is thereby far more difficult, as it first requires
us to judge – based on the particulars of our case – which general
principles, norms, values, etc., apply to our case. Clearly, without such
general principles, we cannot make a reasoned decision. But the great
difficulty is this:
there is no general rule/procedure/algorithm for discerning which
values, principles, norms, approaches apply; rather, these must be
discerned and judged to be relevant in the first place, before we can
proceed to any inferences/conclusions about what to do.
Aristotle referred to this kind of judgment as phronēsis – often
translated as “practical judgment.” For Aristotle (and for many ethical
traditions around the world), the development of this sort of practical
judgment – i.e., one that can help us discern in the first place just
which norms and values do apply to the particulars of a specific case –
is an ongoing project that continues throughout one’s entire life. This
is in part because it requires experience – both of successes and of
failures – as these help us learn (oftentimes, the hard way) what
“works” ethically and what doesn’t. The first time we try to learn a new
skill or ability – say, ice-skating – we are certain to stumble and fall,
perhaps catastrophically, and almost certainly more than once.
Analogously, our first efforts to grapple with difficult ethical issues
that require phronēsis do not always go well: we are caught in the
ethical “bootstrapping” problem of needing precisely the capacity for
judgment that becomes robust enough to help us only after it has been
developed and honed through many years of (sometimes hard)
experience.
The good news is that – however daunting all of this might seem – the
Aristotelian view (among many others) argues that the vast majority of
us are already ethical beings equipped with phronēsis, and thereby the
foundations and abilities for taking on these challenges.
Overview of the book, suggestions for use
By now, readers should have a reasonably good idea of the features of
digital media that lead to specific sorts of ethical issues that we will
explore more fully in subsequent chapters. I also hope that you are
beginning to have a sense that, especially with regard to digital media
that interconnect us globally, it is important to take up these issues
in ways that go beyond the either/or polarities that tend to dominate
popular media reporting.
Chapter arrangement, reading suggestions
The book is organized in a somewhat unusual way, but one that has
proven to be effective and useful. I use a “circle” approach to exploring
and teaching ethics, one that intentionally moves back and forth
between: (a) specific, real-world examples from how we actually use
digital media, and thereby encounter specific ethical problems and (on
a good day) legitimate resolutions; and (b) a number of theories that
often help resolve such ethical challenges and difficulties. This differs
from a more common approach in ethics texts – namely, beginning
with a listing and discussion of important theories, on the sensible
presumption that students can best come to grips with concrete ethical
difficulties only after such a comprehensive introduction to ethical
theories. Instead, I’ve placed ethical theory at the end of the text
(chapter 6). The idea is to encourage students and instructors to take
up just two or three of these theories at the beginning and apply them
to the specific cases explored in the opening chapters. After students
acquire greater facility with how two or three theories work in their
application to real-world cases, they can return with their instructor to
take up additional theories – and then apply these in turn to
additional cases. Placing the theory/meta-theory chapter at the end of
the text thereby gives students and instructors greater flexibility in
determining for themselves just how much theory they wish to absorb
vis-à-vis specific issues and problems. At the same time, it remains
perfectly possible to take the more usual approach, if one wishes, by
starting with chapter 6 and then turning to any one of the specific
cases taken up in chapters 1 through 5.
This circle organization reflects a key discovery in my own teaching
experience. After some years of the more usual “first, all the theories,
then the applications” approach, my students made it clear that they
were more likely to acquire facility with both central ethical theories
and their application if we instead began with just a few theories and
then applied these to specific cases. Whatever the disadvantages of
initially confronting specific examples with a more limited set of
theories, it also often happens that students will thereby discover
precisely through these applications that their initial theories are
somehow inadequate. Specifically, the first theories often do not allow
them to resolve the problems in ways that closely fit their own ethical
intuitions and sensibilities. This is pedagogical gold: students see on
their own the need for further theory/theories, and so, as we return
from specific cases to more theories (making the circle from praxis to
theory), they are characteristically more interested in new theories
than if we had simply worked through all of them from the outset.
By the same token, nothing prevents us from going back to reconsider
earlier cases in light of more recently acquired theories – and thereby
seeing these cases in a new light (making the circle from theory to
praxis). Indeed, doing so often helps us discern new and more
satisfying resolutions of the ethical problems involved. Such
resolutions thereby enhance our appreciation not only for how a
specific theory may offer distinctive advantages vis-à-vis a specific
case, but also for how a now greater range of theories work in their
application to real-world issues and problems.
Instructors and their students who want to follow this approach can
begin with the opening sections of chapter 6 on utilitarianism,
deontology, and ethical relativism, absolutism, and pluralism, and
then move on to chapter 2 (privacy) and, perhaps, chapter 3 (copyright
and intellectual property). Chapter 3 further explores virtue ethics,
Confucian ethics, and the (Southern) African framework of ubuntu:
again, taking up the relevant sections in chapter 6 along with these
components of chapter 3 should be helpful. These elements, along
with feminist ethics and ethics of care from chapter 6, should be
completed prior to chapters 4 (friendship, death online, and
democracy) and 5 (pornography, sexbots, and violence). But some
readers, depending on their interest in the specific topics of each
chapter, may prefer to go to chapter 5 – as more concrete and specific
in certain ways – before taking up chapter 4 (or 3, for that matter).
Case-studies; discussion/reflection/writing/research
questions
Each chapter includes real-world examples intended to elicit initial
reflection; these are accompanied by a series of questions and
suggestions for “reflection/discussion/writing/research.” These
questions and suggestions can be used by students and classes as
initial catalysts for reflection, discussion, and perhaps informal
writing. Instructors may also find useful suggestions here for
questions and material that they can develop into formal writing and
research assignments more precisely tuned to their own curriculum
and goals. But these are only starters and examples. Instructors and
students will certainly come up with their own preferred questions,
case-studies, etc.
OK – enjoy!
Notes
1 Here I use the terms “premise,” “argument,” “conclusion,” etc., in
their logical sense. An understanding of the basic elements of logic is
essential for undertaking ethics – and many ethics texts include an
introduction to logic (e.g., Tavani 2013, ch. 3). For the sake of
brevity, I have chosen instead to introduce and discuss a minimal
number of logical elements: analogy and questionable analogy in
chapter 3; the distinction between exclusive and inclusive “or”s in
chapter 5; and the basic fallacy of affirming the consequent in
chapter 6. Otherwise, in addition to any preferred resources of
instructors, I would further recommend Weston (2018) as an
excellent introduction to logic.
CHAPTER TWO
Privacy in the (Post-)Digital Era?
Everyone has the right to respect for his private and family life, his
home and his correspondence.
(Council of Europe, The European Convention on Human Rights, Section I, Article 8
[1950])
Under ubuntu [an African worldview emphasizing connectedness and
community welfare over individual welfare] … personal information is
common to the group, and attempts to withhold or sequester personal
information are viewed as abnormal or deviant … ubuntu lacks any
emphasis on individual privacy.
(Burk 2007, 103)
Every teenager wants privacy. Every single last one of them, whether
they tell you or not, wants privacy. – “Waffles”
(boyd and Marwick 2011)
[T]he majority of the communications in our [NSA] databases are not
the communications of targets, they’re the communications of
ordinary people…. They’re the most deep and intense and intimate and
damaging private moments of their lives, and we’re seizing [them]
without any authorisation, without any reason … their cell phone
locations, their purchase records, their private text messages, their
phone calls, the content of those calls in certain circumstances,
transaction histories – and from this we can create a perfect, or nearly
perfect, record of each individual’s activity, and those activities are
increasingly becoming permanent records.
(Edward Snowden in Rusbridger and MacAskill 2014)
Chapter overview
These diverse perspectives on privacy lead us into initial reflections
and then exercises on privacy and anonymity online. Following some
cautions regarding the notion and possible (mis)uses of “culture,” we
explore how different people in different cultures understand and
value “privacy” and private life in different ways. We then examine the
important meta-ethical positions of ethical absolutism, ethical
relativism, and ethical pluralism. These positions shape our responses
to the diversity of cultural views and beliefs regarding privacy –
diversity that must be preserved, in my view, alongside any effort to
establish a global ethics of norms and practices that are shared around
the world.
Information and privacy in the global digital
age
“Privacy” and anonymity online – is there any?
These days, most of us are so accustomed to being tracked one way or
another – including by our smartphones, health-tracking devices,
GPS-equipped digital cameras, etc. – that we may not give any of this a
second thought. Whether for managing our health, finding our way in
a new place via Google maps, or keeping track of our children
(Gabriels 2016) or partners (Danaher, Nyholm, and Earp 2018) –
having our rather precise physical location constantly monitored and
known to at least selected apps makes contemporary life more
convenient, healthier, and safer.
Mostly. Many of these apps can also be used for darker purposes,
starting with various forms of stalking, harassment, and worse. And
there are times and circumstances in which we very much want to
protect our identity and locational privacy. Constant tracking –
ramped up in recent years with the rise of so-called smart assistants
(Alexa, Siri, Google Voice, etc.) – for the sake of selling our data,
feeding us advertisements, or serving up the next shopping or music
recommendations can sometimes be more irritating and creepy, if not
downright scary, than helpful.
Specifically, receiving death threats from those who vehemently
disagree with us politically or ideologically is no laughing matter. Such
attacks – dramatically exemplified in the “#Gamergate” controversies,
beginning in 2014 – have become more extensive and more
prominent, in part as they can be easily organized through any
number of anonymous and pseudo-anonymous communication
venues such as Reddit, 4Chan, and others. In #Gamergate, primarily
women – including “female and minority game developers,
journalists, and critics” (Massanari 2017, 330) – were targeted with
“doxxing,” i.e., collecting and then publishing online personal
information such as home addresses and phone numbers, followed by
ongoing campaigns of rape- and death-threats (Massanari 2017, 333–4).
Such attacks are now increasingly common for politicians and other
public figures, whatever their political or ideological affiliation. Such
events – along with the Snowden revelations – have made it painfully
clear that privacy is increasingly difficult to sustain in an “always on”
digital era.
In order to get a better understanding of how online communication
works – in part, so as to develop a better sense of how such
communication can be protected when needed – let’s play a bit with
good old-fashioned email (still one of the primary and most widely
used applications of the internet). The goal here is to see how much
information your email contains about you that is essentially open and
public – and to reflect on how far you may prefer that some of this
information remain private.
To begin with: most email clients (i.e., the software packages such as
Outlook, [Apple] Mail, and Thunderbird) are set to show users only
the basic contents of an email: sender, recipient, cc’s, subject line, and
email body. But look again: these clients also allow you to review the
complete contents of your email. In current versions of Thunderbird,
for example, after selecting a given message, go to the “View” menu
and then click on the option “Message Source.”
INITIAL REFLECTION/DISCUSSION/WRITING QUESTIONS
1 Select a recent email from a friend, and then view the complete
source of the email as outlined above. (NB: if you use a webmail
application such as Gmail, you may not be able to see the complete
source in any easy or straightforward way. As we are about to see,
there is good reason for this.)
(A) What strikes you about the information contained here in the lines
before the usual information about sender and receiver addresses?
(B) Notice that each email includes here a history of how it was sent,
usually in a format like this (from a recent email between two of my
accounts):
Received: from mail-mx01.uio.no (129.240.169.59) by
mail-ex03.exprod.uio.no (129.240.52.6) with Microsoft SMTP Server
(TLS) id 15.0.1395.4 via Frontend Transport; Fri, 11 Jan 2019
12:28:31 +0100
Received: from aibo.runbox.com ([91.220.196.211])
…
Received: from [10.9.9.210] (helo=mailfront10.runbox.com)
by mailtransmit03.runbox with esmtp (Exim 4.86_2)
(envelope-from <c.m.ess@runbox.no>)
id 1ghuzG-00011o-Kw
for c.m.ess@media.uio.no; Fri, 11 Jan 2019 12:28:30 +0100
In particular, notice the first IP (Internet Protocol) address:
129.240.169.59. What, if anything, does this IP address tell you?
To get a quick idea: copy and paste the address into the relevant field
on the “WhatsMyIP” website: www.whatsmyip.org/ip-geo-location. In
this case, the resulting identification placed me rather precisely in the
university building where my office – and thus the source of this email
– are located.
(NB: if you are not able to find this information easily using your mail
client and/or in a specific email – i.e., depending on how your friend’s
email service works – again, we are about to see there may be a very
good reason for this. See also the short script just below for a way to
extract this information programmatically.)
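For readers comfortable with a little programming, here is a minimal
sketch – using only Python’s standard library – of how the “Received”
headers shown in question (B) can be extracted automatically. The file
name message.eml is hypothetical: save any raw message source (as
viewed above) under that name to try it.

import re
from email import policy
from email.parser import BytesParser

# Parse a raw email saved from your client's "Message Source" view.
with open("message.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

# Each relaying mail server prepends its own "Received" header, so
# reading the headers top to bottom traces the delivery path in reverse.
for hop in msg.get_all("Received", []):
    ip_addresses = re.findall(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", str(hop))
    print(ip_addresses)

Each address printed can then be pasted into a geolocation service such
as the WhatsMyIP page mentioned above.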
Certainly, most of my students are quite familiar with VPNs (Virtual
Private Networks), primarily as they use these to access streaming and
other services in another country. They are thus somewhat familiar
with the fact that all communication via the internet depends entirely
on IP addresses. But very few of them seem to be aware that their IP
addresses are also included in relatively open ways in their email
(depending, as we have seen, on the mail provider).
And what about your web-browsing?
SECOND REFLECTION/DISCUSSION/WRITING
2 If you use the web browser Firefox, download and install the add-on
“Lightbeam.” (Under the “Tools” menu, select the “Addons” tab, which
should take you to an introductory page that explains what add-ons
are. Look for the link that allows you to “Browse all add-ons”: this will
take you to a second page that includes a search box. Type in
“Lightbeam” and follow the directions from there.) As Lightbeam runs,
it tracks the websites that are tracking you as you navigate through the
Web – and presents its findings in a graph that shows the increasingly
complex set of links to the sites you have visited and the services they
use to record your browsing activities.
After a few days of letting Lightbeam run, have a look at the graph.
And/or: if you are interested in seeing more of the details of how such
tracking works, you can install an add-on such as “NoScript,” “Privacy
Badger” from the Electronic Freedom Foundation
(https://www.eff.org/privacybadger), along with “Ublock Origin”
(Wallen 2018). These add-ons give you control over the mini-
programs or scripts that are required for many of the conveniences a
given website offers, such as search functions – as well as those used
to track your web-browsing. Such add-ons can also warn web servers
that you do not want to be tracked, thus giving you the possibility to
“opt in” to such tracking, rather than accede to it unawares and
without explicit consent (the current default in the US – one we will
explore more fully below).
At the same time, however, you will notice that these add-ons will
often “break a page” – that is, render it unusable past initial browsing.
Such experiences thus highlight one of the central conundrums of
living in a post-digital era: we enjoy the conveniences such sites offer
us – are these conveniences (sometimes, indeed, necessities) worth
the trade-off of our personal information?
(C) What strikes you about the resulting patterns and connections that
Lightbeam (and/or NoScript and Privacy Badger) presents to you? Are
there any surprises here?
(D) As we will explore more fully below, “privacy” online is not simply
a matter of protecting our own personal or sensitive information.
Moreover, in an age dominated by our use of social networking sites,
microblogging services such as Twitter, shared video sites such as
YouTube, etc., our internet use and web-browsing also reveal a great
deal about those to whom we are closest. In at least some cultures and
contexts, such as Denmark and Norway, what we want to protect is not
solely the personal or sensitive information of an individual but also
the personal information of those within our “intimate sphere”
(intimsfære) – our close circle of friends and family. In these contexts,
what is important to protect is not simply individual privacy but our
“private life” (privatlivet) as made up by these close relationships.
So, before going much further in thinking about privacy, we need to be
clearer about just what we want protected and/or what we have a right
to have protected. Hence two questions here:
(i) In terms of the information available about you as you send email
and use the Web, just what do you think/feel needs to be kept within
your control? Personal and sensitive information about you – and, if
so, what counts as such information? And/or personal and sensitive
information about those with whom you communicate and interact in
various online environments, beginning with your close circle of
friends and family?
(ii) Given the amount of information you supply – in the form of your
IP address that accompanies your email, and especially the picture of
your web-browsing habits and patterns created as various websites
and services record your IP address along with your specific visits –
how much “privacy” do you appear to have online? More specifically:
given your response to the first question above, do online
environments allow you to control and protect the kinds of
information you think/feel should be protected, e.g., as a right?
(E) Whatever your responses in (D), explain why. That is, what
arguments/evidence/reasons and/or other grounds, including feelings
or intuitions, can you appeal to that would justify your response(s)?
Of course, it is no surprise (at least to most of us) that companies,
governments, and some savvy individuals have access to extensive
databases that record and document individual web-browsing: when
coupled with other databases that record, say, your purchases at your
favorite store (whether online or offline) and increasingly powerful
“data-mining” techniques that cross-correlate this information, such
institutions (and at least some hackers) thus learn an astonishing
amount of detail about you as an individual consumer. And so it is that
Amazon or Facebook, for example, along with thousands of other
corporations, are able to “micro-target” advertising precisely to you
and tailored to meet your apparent interests and needs.
While much of this may be useful to interested shoppers, or at least
benign, it is certainly not without its risks and discomforts. To cite a
now classic case: early in 2012, the US chain store Target sent coupons
for baby cribs and clothes to a high-school girl. Target’s sophisticated
software analysis of the young girl’s shopping activities indicated a
high probability that she was pregnant – but the coupons came as a
less than pleasant surprise to her father who, until the coupons’
arrival, had not been informed of the situation (Duhigg 2012).
All of this makes clear that IP addresses are indeed “sensitive and
personal” (to use the language of EU data privacy protection laws):
this is all the more so in an era of Big Data techniques of aggregating
the thousands and tens of thousands of “digital breadcrumbs” we leave
behind on a daily and weekly basis. But this further helps to highlight
the fact that there are important differences between how diverse
countries and cultures decide to treat IP addresses. US-based Google
has long held that IP addresses are not “personal information”: by
contrast, European Union data commissioners ruled in 2008 that IP
addresses are indeed personal information and thus require data
privacy protection (White 2008). More recently, the EU has drafted
and implemented even more stringent data privacy regulations as
implemented in the GDPR (2016). This is in part as “the maximum
fines [for violation of privacy regulations] are now very high” – i.e., up
to 4 percent of global gross income (Berbers et al. 2018, 23f.). In fact,
Google has now been fined c. US$57 million for violation of these new
regulations (Romm 2019). Stay tuned …
More broadly, we have begun to see that there are indeed strong
cultural influences on how we understand and value “privacy” –
beginning with strong differences between the United States and the
European Union, as these examples make clear. However these
contrasting views may be resolved (if at all), here we can continue by
noting that it is possible to establish and sustain at least some level of
anonymity online: in addition to using security add-ons such as
NoScript, you can explore using “anonymizer” software and webmail
and web-browsing services that hide this sort of information,
beginning with Tor (www.torproject.org/index.html.en). As well, by
using encryption software (e.g., the freely available “Pretty Good
Privacy” software), users can send emails that can be easily deciphered
and read only by recipients who hold the required decryption key,
thereby assuring themselves a reasonably strong
degree of privacy. But unless users take these unusual – and, in my
experience, not widely familiar – steps, their transmissions across the
internet will thus be more or less open to anyone who cares to look.
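To make the encryption step more concrete: the following minimal
sketch assumes the third-party python-gnupg wrapper around a locally
installed GnuPG (a modern descendant of PGP), and assumes the
recipient’s public key is already in your keyring; the recipient
address is hypothetical.

import gnupg

gpg = gnupg.GPG()  # uses your default GnuPG home directory and keyring

# Encrypt so that only the holder of the matching private key can read it.
encrypted = gpg.encrypt("Meet me at noon.", recipients=["friend@example.org"])

if encrypted.ok:
    print(str(encrypted))  # ASCII-armored ciphertext, safe to paste into an email
else:
    print("Encryption failed:", encrypted.status)

The important design point is that the message travels as ciphertext:
only the recipient’s private key – which never leaves their machine –
can decipher it.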
Similar comments hold, by the way, for people using their computers
to share information through peer-to-peer (p2p) networks – for
example, in downloading and uploading music and other files through
a network such as BitTorrent, but also in instant messaging exchanges,
including the use of video cameras for video chat and conferencing.
Not to mention doing all of this more and more on your smartphone.
(Tor, for example, is available for Android devices, along with
encrypted messaging services such as WhatsApp, Signal, Telegram,
and others.)
Finally, you can easily check whether your information, beginning
with your email account(s), has been hacked or “pwnd” (as someone
has taken over its control): https://haveibeenpwned.com. The reality
these days is that, in all likelihood, yes, you have been hacked. And
especially outside the more protected spaces of the EU and
Scandinavia, this means an increasing obligation on individuals to take
such measures to protect their information.
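For the technically curious: the same service exposes a freely usable
“Pwned Passwords” interface that can be queried programmatically. The
minimal sketch below, using only Python’s standard library, illustrates
its privacy-protective k-anonymity design: only the first five
characters of the password’s SHA-1 hash ever leave your machine.

import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    # Returns how many times this password appears in known data breaches.
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the 5-character hash prefix is sent; the full hash stays local.
    url = "https://api.pwnedpasswords.com/range/" + prefix
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8")
    # The response lists "SUFFIX:COUNT" pairs sharing that prefix.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

print(times_pwned("password123"))  # a depressingly large number

Because the server only ever sees the hash prefix – shared by thousands
of unrelated passwords – it cannot tell which password you actually
checked.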
REFLECTION/DISCUSSION/WRITING
“PRIVACY”: A MATTER OF CULTURE
As the examples of Google vs. the EU suggest – and as we might expect
– our understandings of privacy vary widely, not simply from
individual to individual but also from culture to culture. The following
exercise is intended to give you and your cohorts an initial set of
indicators of where your sensibilities regarding privacy might lie upon
a continuum of possibilities. It may also help you begin to think about
what you mean by “privacy” as a concept or notion.
Consider the “smart ID” project in Thailand – a project that aimed to
create and issue national identity cards that contain the following
information:
Name
Address
Date of Birth
Religion
Blood Group
Marital Status
Social Security
Health Insurance
Driver’s License
Taxation Data [income bracket, taxes paid/owed]
Health-care Entitlements
Officially Registered Poor Person?
Educational Attainment
Utilities User Info [how much water/electricity you have used, etc.]
Credit Bureau Info [whether you have defaulted on loans, how much
you owe, etc.]
Log-in Information through Govt’s App Center
Bank Account Number
[The last two bits of information will allow whoever can read the card
to check your bank account as well.]
(Kitiyadisai 2005, 22; Soraj Hongladarom, personal communication, 2019)
REFLECTION/DISCUSSION/WRITING
(A) Where do you draw the line? Beginning with your own responses,
which of the elements of identity would you be comfortable having
encoded on a chip in a national ID card? Which of these elements do
you think/feel/believe should not be included in a national ID card?
(B) Why? In both cases, what arguments/evidence/reasons and/or
other grounds, including feelings or intuitions, can you appeal to that
would justify your response(s)?
(C) You may want to compare your and your cohorts’ sensibilities with
the following, which I’ve observed in using this example with students
and faculty from a variety of cultural backgrounds.
Roughly, reactions/responses range from a minimum to a maximum
amount of information being designated as public or private. These
variations, moreover, appear to correlate with several values and
sensibilities that are known to vary from culture to culture. One of the
most important is suggested in the set of quotations at the beginning
of this chapter – in the contrast between the Council of Europe’s
articulation of what amounts to individual privacy as a human right
vis-à-vis a lack of emphasis on individual privacy in the worldview of
ubuntu, for example. Indeed, we will see that this lack of emphasis on
individual privacy – in part because of a greater emphasis on
community harmony and integration – is characteristic of a wide
range of non-Western cultures and traditions.
And within the domain of Western countries and cultures, there are
further variations in our expectations regarding privacy that correlate
with often very different understandings of the role of the state vis-à-
vis the life of the individual.
So, for example, most US students – if they accept the idea of a
national identity card at all – are moderately comfortable with a card
that would contain name, address, date of birth, and Social Security
number. Perhaps religion. Perhaps blood group (in case of a medical
emergency). Perhaps driver’s license. Perhaps marital status. But it
becomes unclear how much the federal government – or anyone else,
for that matter, besides the person who handles my medical bills –
needs to know about my health insurance. As for taxation data,
including income data – no, thank you! (And, of course, while there
are plenty of poor people in the US, they are not “officially registered”
– nor, I imagine, would anyone be eager to have that registration
included in their identity card.) In Norway, by contrast, everyone’s tax
records are published annually online – in part, I am told, so that
everyone can see that everyone else is contributing their fair share to
the common good.
Danish students and faculty draw the line quickly at religion. This is in
keeping with a strong Danish sensibility – encoded in Danish data
protection laws (and those of the European Union) – that insists on a
(more or less) absolute freedom of belief and viewpoint in matters of
political ideology and religion. But if we are to enjoy such freedom (as
we will explore more fully below), our beliefs and viewpoints must be
protected as personal information.
What about Thai people? Roughly speaking, while some activists and
academics strongly oppose the national “smart ID” card, the cards have
been accepted by the majority of the population as necessary – in part
because, as the government has argued, they will help in the fight
against domestic terrorism. By the same
token, while there has been some resistance in the People’s Republic of
China regarding the emerging SCS – one that is far more
comprehensive in terms of the information it collects – there is also
broad support for the system as it promises to reduce corruption while
rewarding those who obey the larger social rules: such a system will
contribute to “law-abiding and ethical conduct in Chinese society and
economy” (Kostka 2018, 3).
Finally, if you come from a culture shaped by emphases on community
harmony – as the ubuntu example suggests – you may see no (good)
reason at all for wanting any form of individual privacy.
Overall, then, there emerge these points along a continuum of possible
responses:
Minimal info (Denmark) … Moderate info (US) … Maximum info
(Thailand; ubuntu; Chinese Social Credit System)
Given this continuum and its points of specific national/cultural
reference, where have you and your cohorts drawn the line?
So far as you can tell at this point, how might your sensibilities
regarding privacy be connected with the larger national, political, and
cultural environments in which you find yourselves?
Interlude: Can we meaningfully talk about
“culture?”
Q: How do you tell the difference between an introverted Norwegian
and an extroverted Norwegian?
A: The extroverted Norwegian looks at your shoes when he’s talking to
you.
Johnny Søraker told me this joke in 2005, in response to a joke I
passed on from Minnesota: “Did you hear about the Norwegian man
who loved his wife so much he almost told her?” Both jokes trade on
the cultural stereotype of Norwegians as very reserved; both are funny,
in my view – especially if they are told by Norwegians (or their
descendants) as a way of poking fun at their own tendencies and
habits.
These jokes help make a larger point: there are behavior patterns
(beginning with language), norms and values, preferences,
communication styles, and judgments regarding what counts as
beauty, good taste, etc., that are characteristic of one group of people
in contrast with others. Since the nineteenth century, anthropologists
have accustomed us to thinking of these sets in terms of “culture.” So,
in the exercise above, we’ve seen associations between specific
attitudes and beliefs regarding privacy and larger (primarily national)
cultures. “Culture” is a constant thread throughout this volume, but it
is critical to make clear from the outset: (a) how far such references
are useful; and (b) in what ways these uses of “culture” are limited
and, indeed, potentially misleading – even destructive.
To begin with, as I hope these Norwegian jokes suggest, such
generalizations about (national) cultures contain at least some grains
of truth. In this case, that is, it seems safe to say – as a generalization –
that indeed many (if not most) of the people born and raised in
Norway are, in comparison with, say, the average Midwestern
American, more shy and reserved. Such generalizations are useful,
first of all, as starting points for thinking through our differences and
similarities. Indeed, for many (most?) people, our culture (however
difficult it is to define) usually serves as a core component of our
identity, one that demarcates in various ways how we are both alike (in
relation to those who share at least many of the elements of the same
culture) and different (from those shaped by different cultures).
For example, as a Midwesterner, I know that (most of) my US East
Coast friends will speak and walk more quickly than is the norm in
Middle America. These sorts of differences are then the occasion for
our judging – or, as frequently happens, misjudging – one another on
the basis of what is “normal” (in at least a statistical sense) for our own
culture. For example, in many US Midwestern small towns, the norm
is to be “friendly” with cashiers and sales clerks, so as to spend a little
time in conversation during the course of an otherwise commercial
exchange. This friendliness is often (mis)interpreted as time-wasting
superficiality by some of my US East Coast friends. In turn, their
tendency to avoid such small talk often tempts Midwesterners to
(mis)judge them as abrupt, unfriendly, aloof, perhaps arrogant.
And, of course, as we move across national cultures, the differences
become even more striking. So, as the jokes above suggest,
Norwegians tend to be much more reserved, for example, than
Southern Europeans. And so on.
These examples illustrate three critical points to be kept in mind
whenever “culture” appears in this text. First, up to a point at least,
these sorts of generalizations are useful – indeed, at points, essential –
if we are to understand and communicate respectfully with one
another. Simply, the better we understand such cultural differences,
the better we can anticipate how to interpret and communicate
appropriately with those who do not share our own cultural values and
communicative preferences. For example, I am less likely to
misinterpret my East Coast friend’s curt response (curt as compared
with the norm for a Midwesterner) as rude or unfriendly, and more
likely to understand it as intended – that is, as efficient, to the point,
and thereby respectful of our time as a limited and thus valuable
commodity.
More broadly, these differences are interesting and enriching, as they
make us aware of what deeply shapes our individual identities and
group norms, and thereby of the incredible richness and diversity of
human societies. In particular, these generalizations should thus be
helpful to us in coming to understand both ourselves and the multiple
Others around us, as we are both similar and irreducibly different in
critical ways. Doing so, finally, is necessary if we are to overcome the
twin dangers of ethnocentrism (assuming our own ways of doing
things are universal), and then judging Others as inferior because their
ways are different from our own. Human history is too full of the sorts
of warfare, colonization, enslavement, and imperialism that follow
upon such ethnocentrism. As Ames and Rosemont put it: “the only
thing more dangerous than making cultural generalizations is the
reductionism that results from not doing so” (1998, 20). That is, as
risky, difficult, and inevitably incomplete as an attempt to characterize
culture may be, it seems a necessary exercise if we are to avoid
assuming that all others must be like us, and that they are less than
fully human if they are not.
But, second, when we use such generalizations, we obviously risk
turning them into simple and unfair stereotypes that can foster unjust
prejudices. Please remember: every generalization, most especially the
generalizations that we think may help characterize a given “culture,”
by definition entails multiple exceptions to the general rule. In
statistical terms, there are always “outliers” – those people who stand
outside the statistical norm as defined by the standard bell curve. So:
many Midwesterners may seem friendly, open, and extroverted as
compared with many Norwegians – but, of course, there are more than
a few introverted Midwesterners as well as extroverted Norwegians
who simply confound the generalization. In other words, we must
never mistake a generalization for anything other than a
generalization or heuristic, an initial and provisional guideline for first
interpretations – not, for example, some sort of universal category that
somehow captures an eternal and immutable essence of Midwestern-
ness, Norwegian-ness, etc. (cf. Rohner 1984).
Third – however far such generalizations may capture elements true of
many, but not all, people shaped by a given culture – we must further
keep in mind that, for every individual who may share such national
characteristics, she or he is further shaped by a very complex range of
additional differences and variations both within and beyond national
categories. Folk in Eastern Oklahoma are clearly distinct from folk in
Western Oklahoma, just as people in Aarhus (Denmark) have distinct
(and not always positive) impressions of how Copenhageners, while
clearly Danes, at the same time differ from them (and vice versa, of
course). Immigrant communities are distinct in multiple ways, while
simultaneously including people seeking either to assimilate to or
hybridize with the larger national culture. Indeed, in any given city, a
specific neighborhood features a specific set of cultures or subcultures
as affiliated with age, ethnicity, and class. And then, of course, gender
– generally – makes a difference as well. Oh yes: all of these change
over time, of course – some elements more quickly than others –
complicating the picture still further.
All of this means, again, that any generalizations we make about a
culture can be taken only as starting points – as heuristics open to
change, not static concepts. That is, while potentially useful for our
initial reflections and encounters with one another, further
exploration almost always leads us to more complex and nuanced
understandings. As a result, we will almost always modify and perhaps
reject altogether elements of these starting points. In fact, we are
about to see an example of this sort of modification shortly, in Soraj
Hongladarom’s account of Buddhist understandings of the person and
privacy – an account that will nicely complicate the basic differences
between Thai and US culture that we have started with here (see pp.
45–9). At the same time, however, some notion of culture remains
useful when handled with care, beginning with current work in
intercultural communication (e.g., Cheong et al. 2012; Vignoles et al.
2016).
By keeping these comments and caveats in mind, I hope that readers
will never be tempted to mistake what I intend as an initial, dynamic,
always incomplete, and exception-laden generalization and heuristic
for a stereotype.
“Privacy” in the global metropolis: Initial
considerations
In the developed world, we increasingly are the digital information
that facilitates our lives and engagements with one another. Luciano
Floridi made this point most strongly early on: a person is her or his
information.
“My” in “my information” is not the same as “my” in “my car” but
rather the same as “my” as in “my body” or “my feelings”; it expresses
a sense of constitutive belonging, not of external ownership, a sense in
which my body, my feelings, and my information are part of me but
are not my (legal) possessions.
(2005, 195)
In some ways, this claim may seem too strong. But there is no question
that, as more and more of our entertainment and communication take
place via digital media, and as more and more of our lives are captured
in digital form, our “digital footprint” – basically, everything you do
online, from social media use to searching and exploring websites,
shopping and banking, etc. – expands dramatically.
But is all of this information primarily our property in the sense of an
external, legal possession – and/or is Floridi correct to suggest that at
least some elements of “our” information are who we are, in the same
way as we think of ourselves in terms of our own bodies and feelings,
for example?
Floridi’s claim becomes all the more persuasive when we consider how
much of our lives in the developed world – beginning with, but by no
means limited to, important governmental identity information (e.g.,
Social Security numbers in the US, CPR numbers in Denmark,
Fødselsnummer in Norway, etc.), bank and credit card accounts (e.g.,
the RIB number in France, IBAN and SWIFT numbers, etc.), and so
forth – is digitized, processed, and transmitted electronically. Couple
this with the metaphor introduced by James Moor (chapter 1): our
information is “greased” – it is (almost) as easily copied and
transmitted to those whom we may not want to see it as to those whom
we do want to see it. As the aptly named phenomenon of identity theft
suggests, losing these sorts of information about ourselves – what we
think of as private information – to others may well feel like, and
result in harms more like, a direct assault on our own bodies and
feelings rather than merely the theft of external property.
To use another example: simply ask your neighbor if you can have
access to his or her mobile or smartphone – that is, to the text
messages, contact list, phone numbers, perhaps emails, documents,
etc., that are stored there. Especially if you ask this of a relative
stranger, it seems likely that he or she will refuse: you’re asking for
information that is private – information that increasingly defines our
sense of identity in a digital age.
You don’t have to be paranoid – but it helps …
Whatever our individual ethical assessments of and responses to these
situations may be, many threats to privacy are well known. Most of us
know, for example, to be careful with passwords to important
accounts, with PINs for debit and credit cards, and so forth. Indeed,
after any number of increasingly spectacular hacks – e.g., of some 500
million customer records, including passport and credit card
information (Perlroth, Tsang, and Satariano 2018) – most of us know
that both commercial and governmental databases containing our
personal information are increasingly vulnerable targets. Once a
database is broken into, others are then able to use this information
about us – enacting what is rightly called identity theft not only to take
money from our bank accounts and charge purchases to our credit
cards, but also thereby, in some cases, to jeopardize our own claims to
our own identity.
In addition, we are constantly vulnerable even when we may think we
are safest – that is, sitting in front of our computer, tablet, or
smartphone, sending information via email, browsing the web
(perhaps for shopping), or doing banking transactions. We face a
growing barrage of Trojan horses, worms, and viruses that can, for
example, capture and then transmit critical banking information to a
third party – or, more ominously, lock up our devices until we pay a
significant fee to the unknown attacker (so-called ransomware).
Hacking opportunities have also exponentially increased with the
diffusion of wifi networks – including home routers that can be
exploited in various ways (Khandelwal 2018). And then there’s the
emerging Internet of Things and so-called Smart Cities, which rely on
the diffusion of small, inexpensive sensors and devices (e.g., home
electrical meters) – all of which are thereby hackable with comparative
ease (Berbers et al. 2018, 36–9).
To be sure, in what amounts to an ongoing arms race, improved
security software and expanded privacy protections are likewise
developed and made available. Password managers that assign distinct
and hard-to-guess passwords to specific accounts are increasingly
common (if not essential); by the same token, two-step verification –
confirming a log-in attempt by way of a security code sent to a second
device – is more or less the default these days. But, despite these
advances, we remain vulnerable – in part because of our own attitudes
and practices. The so-called privacy paradox demonstrates that we are
often our own worst enemy in these matters. Very simply: most of us
say we’re concerned about privacy – but when given possibilities of
protecting our privacy at even modest cost or inconvenience, most of
us prefer not to (e.g., Hargittai and Marwick 2016).
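At their core, such password managers do something quite simple:
draw each password from a cryptographically secure random source,
so that no two accounts share a guessable secret. A minimal sketch
using only Python’s standard library:

import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=20):
    # secrets (unlike random) is suitable for security-sensitive use.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # a distinct, hard-to-guess password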
If you’re not paranoid yet … terrorism and state
surveillance
Many of us are further aware that, beyond criminals and hackers, as
citizens we face additional threats to our privacy – for example, from
corporations that collect data on individual purchasing choices
(usually by consent in exchange for modest discounts or other
economic incentives). Especially in light of corporations such as Apple,
Google, Microsoft, and (even) Facebook going to ever greater lengths
to protect consumer privacy (e.g., Apple’s refusal to help the US FBI
hack into an alleged terrorist’s iPhone – Holpuch 2016), governments
may be the worst culprits. On the one hand, the modern liberal state
exists to protect basic rights – including rights to privacy; but, to
protect our rights – especially so-called positive or entitlement rights,
e.g. to education, health care, disability assistance, family benefits
such as child support, maternity and paternity leave, and pension
payments, etc. – governments clearly require a great deal of personal
information about us. How governments ought to and actually do
protect that information from illicit and potentially devastating use
against their own citizens varies widely from country to country.
Somewhat more darkly, especially following the September 11, 2001,
attacks in the United States, governments throughout the world justify
ever greater surveillance of their own (and other) citizens in the name
of fighting terrorism. And so, especially as Edward Snowden made
crystal clear, unknown (because secret) quantities of personal
information – as transmitted through emails, phone calls, etc. – are
collected and scrutinized for potential threats. By the same token,
surveillance of citizens through security cameras – distributed ever
more densely throughout the world – continues to expand. Such
surveillance – e.g., as it identifies you while jaywalking – appears to
play a role in Western “predictive policing” programs (Burgess et al.
2018, 29; Hintz, Dencik, and Wahl-Jorgensen 2018, 55f.) as well as in
the emerging Chinese SCS.
The SCS is being cobbled together from several dozen smaller
versions – both public and private, including currently private credit-
rating companies such as Sesame (Kostka 2018, 2–3). As in credit-
rating systems in the West, data are collected on income, debt, and
purchasing patterns – but also on matters such as being a parent (a
plus) or playing video games (a minus). Other versions deduct points
for infractions such as failing to pay fines, misbehaving on a train,
standing up in a taxi, cheating in videogames, jaywalking, running a
red light, or failing to show up for a restaurant reservation. These
deductions can be countered by good behavior, such as donating
blood, contributing to a charity, or doing a certain number of hours of
volunteer work (Xu and Xiao 2018; Kobie 2019).
A sufficiently low score can get you blacklisted – as some 7 million
people have already experienced. Once blacklisted, you can further be
prohibited from applying for a loan, buying property, buying plane
tickets, and “banned from travelling some train lines” (Kobie 2019).
Alternatively, a sufficiently high score will “redlist” you, giving you
easier access to governmental services and tax reductions, for example
(Kostka 2018, 3).
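The mechanics described here amount to a simple additive score with
cutoffs. The following toy model – with entirely invented behaviors,
weights, and thresholds, offered only to make the logic visible – shows
the shape of such a system:

# Toy model of additive social-credit scoring. All values are invented
# for illustration; they are not the actual SCS weights or thresholds.
DELTAS = {
    "jaywalking": -10,
    "missed_reservation": -5,
    "donated_blood": 8,
    "volunteer_hour": 2,
}

def score(events, baseline=100):
    return baseline + sum(DELTAS.get(e, 0) for e in events)

def status(s):
    if s < 80:
        return "blacklisted"  # e.g., barred from some train lines
    if s > 120:
        return "redlisted"    # e.g., easier access to services
    return "ordinary"

s = score(["jaywalking", "jaywalking", "donated_blood"])
print(s, status(s))  # 88 ordinary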
The Chinese government argues that the intention is to stamp out
corruption and reward socially beneficial behavior, and so build trust
within the larger society. But Western researchers and observers
counter that the aim of such positive and negative reinforcement is “to
create a citizenry that continually engages in automatic self-
monitoring and adjustment of its behavior” (Hoffman 2017, cited in
Kostka 2018, 3). Anyone familiar with Foucault’s famous account of
the Panopticon – and/or the Black Mirror episode “Nosedive” – will
find all of this chillingly familiar. The upshot will be that “the
Communist Party will possess a powerful means of quelling dissent,
one that is comparatively low-cost and which does not require the
overt (and unpopular) use of coercion by the state” (Kostka 2018, 3).
At the same time, all of this again illustrates crucial differences in
cultural assumptions about the self and privacy. In keeping with
traditional suspicions of – if not hostility toward – the modern
Western presumption that individual privacy is a positive good, Genia
Kostka reports high levels of public approval of these systems in China
(2018).
Lastly: the SCS is especially significant not only for the citizens of
China, but potentially for the rest of us. Most starkly, according to
Freedom House, the past six years have marked the rise of “Digital
Authoritarianism” as diverse regimes have expanded various forms of
online censorship and surveillance – in part by way of copying the
Chinese model (Shahbaz 2018).
“Privacy” and private life: Changing attitudes
in the age of social media and mobile devices
These manifest threats to personal privacy and private life are further
accompanied by changing attitudes toward privacy in both “Western”
and “Eastern” societies – perhaps as an artifact of our growing use of
digital media (Ess 2010). For example, in sharp tension with worries
about hierarchical forms of surveillance by states and corporations,
the terms first introduced by Albrechtslund (2008) – “voluntary” and
“participatory surveillance” – are now commonplace understandings
of our always-on behaviors. Lateral surveillance of one another is
apparent on any given social media site (Facebook, Instagram,
Twitter, etc.) as well as on video sharing sites such as YouTube. Still
more recently, our efforts to sustain some version of privacy are
increasingly shared: we want to share what was once seen as primarily
individually private information – but now within specific groups.
“Group privacy” or “collective privacy” are terms that further refine
and describe these changing privacy sensibilities – especially in an era
increasingly dominated by “Big Data” approaches (Taylor, Floridi, and
van der Sloot, 2017).
Similarly, the mobile phone and then tablets invert our earlier
contexts as well: being “on the grid” is now the norm – and so these
devices have turned traditional notions of “public” and “private”
upside down. Earlier, privacy in the form of being “off the grid” of a
public communications network was commonplace. And, especially
for the sorts of philosophical and political reasons we will explore
more fully below, the capacity to be incommunicado was seen to be
essential to being human. First of all, such privacy makes possible the
sort of space and time needed for the development of an autonomous
self, one capable of reflecting on and carefully choosing among the
multiple acts and values available to human beings, both in solitude
and in community with others. As Virginia Woolf (1929) famously
advised, women seeking their own self-development, creativity, and
freedom need “a room of one’s own.” In this way, privacy is an
essential condition for our creating our very selves. (We will see a
clear example of this below in how the right to privacy is justified in
the German constitution [Grundgesetz] in part as it protects our
further “right to personality” [Persönlichkeitsrecht] – Whitman 2004,
1180ff.). Such autonomy, moreover, is not only a necessary condition
for our being suited to living and acting in a democratic society; most
fundamentally, as modern political theory emphasizes, only such
autonomous selves can justify the existence of democratic societies.
But in the contemporary world, mobile phones, tablets, and other
GPS-equipped devices have made publicity our default setting. Again,
being “always on” inverts or turns upside down earlier understandings
of who we are and of our relationships with others. We will now see
how this inversion and transformation means, first of all, that we have
to rethink our conceptions of privacy in dramatic ways. It may also
well mean that we will likewise need to revisit and perhaps revise our
earlier ethical and political philosophies. All of this is because, most
fundamentally, these developments are profoundly reshaping our
most basic assumptions about human selfhood and identity.
“Privacy” and private life: Cultural and
philosophical considerations
As “Waffles” reminds us at the opening of this chapter, individual
privacy remains a core concern of contemporary teenagers – whatever
their worried parents may think. Indeed, younger people have led the
way in abandoning Facebook altogether in favor of more closed social
media sites – including Snapchat and Instagram “stories,” whose
default is the erasure of postings within 24 hours. While the
generations may
disagree on the nature and limits of privacy, the key question is: what
do we mean by “privacy?”
In the US context and tradition, the conception of privacy begins with
primarily physical notions: as the examples of bedroom and bathroom
privacy suggest, what we initially wanted to protect was the privacy
of spaces, first of all our homes (Whitman 2004, 1161). Indeed,
“privacy” does not appear as a basic right in the US Constitution or Bill
of Rights: rather, it emerges only gradually, beginning with the
seminal paper by Samuel Warren and Louis Brandeis in 1890. As
Bernhard Debatin points out, the concept is rooted in Fourth
Amendment protections against “unreasonable search and seizure” of
private property (2011, 49).
Such a conception may have worked well in the days before electric
media, such as telegraph, radio, and then the internet. But with the
advent of phone calls or radio transmissions that could be intercepted
and recorded unbeknownst to their primary senders and receivers, it
gradually becomes clear that “privacy” is not simply a matter of
protecting specific spaces. Rather, as we have seen, Luciano Floridi
makes clear that what we want protected in an information age is
precisely our information – information that, in digital form, is that
much easier to access, copy, and distribute. But why would we worry?
As Judith DeCew points out: “The expectation of privacy is grounded
in the fear concerning how the information might be used or
appropriated to pressure or embarrass one, to damage one’s credibility
or economic status, and so on” (1997, 75). In short, we are afraid that
someone and/or some organization will be able to use information
about us to harm us – and/or those close to us – in some way.
We will explore the legal and philosophical dimensions of privacy
more fully below. Here we’ll continue with the cultural aspects,
remaining with “Western” societies. To begin with, in German,
Norwegian, and Danish, for example, there are certainly counterparts
to the English term “privacy” – namely, Privatheit (German) and
privathet (Norwegian, Danish). But, especially in Denmark and
Norway, “privacy” discussions focus much more on privatlivet –
“private life.” Such private life encompasses not simply the interests
and pursuits of a solitary individual: in addition, privatlivet is
understood to involve one’s intimsfære, an “intimate sphere” of close
friendships and relationships. These concepts thus do not map neatly
onto earlier US notions of privacy as primarily an individual private
space. In contrast to such a static or substantive conception, these
notions of privatlivet and the intimsfære bring to the foreground our
close relationships – relationships that are ongoing, evolving, and in
some important ways negotiated over time. (A once distant stranger
becomes accepted as an important member of one’s intimsfære; a
parent or sibling or child or close relative may suddenly pass away; a
long-time friend may move to a distant country, making it difficult to
sustain a sense of closeness and intimacy.) In these cultural contexts,
then, it is not simply important to protect “privacy” for the individual
– specifically, to find ways to ensure that personal or sensitive
information about the individual is not taken up by those who would
use it to harm that individual. At the same time, what we want to
protect from harm includes these close relationships and the private
life they constitute. So it is, then, that among the guidelines issued in
Norway by the National Committee for Research Ethics in the Social
Sciences and the Humanities is “The obligation to respect individuals’
privacy and close relationships” – i.e., not simply an individual’s
privacy (NESH 2006, 17).1 We will further see that these more
relational understandings of “privacy” not only cohere with emerging
practices of group and collective privacy: in addition, they are
articulated in the increasingly influential account of privacy as
“contextual integrity” as developed by Helen Nissenbaum (2010).
As the example of ubuntu suggests, when we turn to what we once
thought of as “non-Western” cultures and traditions, what counts as
even a rough approximation of “privacy” becomes still more
complicated. As we will explore in the next section, in cultures shaped
by Buddhist and Confucian conceptions, the stress is on the self as a
relational self – i.e., a sense of identity that is more or less fully
constituted precisely by the extensive relationships that define us as
members of families and larger communities. To use the example of a
once classic form of Chinese introduction: such an introduction would
recount my primary relationships, beginning with my parents,
siblings, aunts and uncles, and (perhaps) children. This sense of
selfhood thereby stresses the importance of sustaining harmonious
relationships with the family and larger community, and includes an
exquisitely developed attention to the moods and wishes of others. The
Japanese version of such attention, wakimae or “situated
discernment,” has been described by one of my Japanese students as
the need to “read the atmosphere” or even to “read the minds” of those
around one, with the goal of attuning one’s behavior so as to avoid
conflict or disharmony (cf. Hildebrandt 2015, 117–21). In this context,
some notion of individual “privacy” – a desire to hold something of
oneself apart from the group – can be seen only in negative terms. As
in the Western Middle Ages – that is, before the rise of modern
conceptions of individuals as rational autonomies who thereby require
privacy – the notion seems to be rather: “the only reason you would
want privacy is if you have something bad (or illegal) to hide.”
At the same time, however, the shifts we are starting to see in
“Western” societies toward the “publicly private / privately public”
(Lange 2007) and “group privacy” (Taylor, Floridi, and van der Sloot,
2017) suggest a correlative shift in our underlying assumptions
regarding selfhood and identity – namely, from an earlier emphasis on
strongly individual notions of selfhood toward a greater emphasis on
more relational notions of selfhood. In this sense, (recent)
“Westerners” are becoming more like (older) “Easterners.” At the
same time, we will see more fully below that (recent) “Easterners” are
likewise shifting – from a greater emphasis on relational selfhood to a
greater emphasis on more individual selfhood: this shift is apparent
first of all in the changing demands in “Eastern” cultures for (older)
“Western” notions of individual privacy as a positive good. Finally, we
will explore an important middle ground between these two – namely,
conceptions of the self as a relational autonomy, that is, a self that
conjoins more individual notions of the self as a freedom or autonomy
alongside the relationality that increasingly defines “Westerners,”
especially as taken up within social media.
For the moment, however, we see another continuum emerging here:
(Strongly) individual conception of self → relational autonomy →
(strongly) relational conception of self
Individual privacy → Group privacy → “Publicy” (no privacy)
US (protection of spaces) … Norway (protection of privatlivet) …
Confucian, Buddhist societies
REFLECTION/DISCUSSION/WRITING
(A) In light of the above discussion, what are your intuitions
regarding:
your sense of selfhood or identity, and
your sense of privacy (and/or private life) – that is, what kind(s) of
privacy (if any) and/or private life (if any) do you feel/think requires
protection?
(B) Are your intuitions consistent with your
historical/linguistic/communicative backgrounds – i.e., as shaped
primarily by the “culture(s)” of a specific nation-state such as the US,
Scandinavia, “Eastern” societies, etc.?
(C) Whatever account of selfhood and privacy you offer, can you
further provide arguments, evidence, and/or some other forms of
support that would somehow justify these conceptions?
(As we will see, these conceptions further correlate with our
assumptions regarding the most appropriate or desirable forms of
social structures and governance – broadly, a continuum that
emphasizes equality and democracy vis-à-vis hierarchy and more
authoritarian regimes. Our preferences in these domains may provide
us with an important set of arguments for the kinds of selves and
privacies / private lives we think are justified.)
“Privacy” and private life: First justifications,
more cultural differences – transformations
and (over-?)convergence
As we have seen, strongly individual notions of “privacy” have
emerged in the modern West as one of the basic rights of individuals.
But justifications for this right vary. As Deborah Johnson (2001) has
pointed out, in the United States privacy is seen as an intrinsic good
(something we take to be valuable in and of itself) and as an extrinsic
good – something valuable as a means for another (intrinsic or
extrinsic) good.2 In particular: we need privacy to become
autonomous selves. That is, we need privacy to cultivate and practice
our abilities to reflect and discern our own ethical and political beliefs,
for example, and how we might enact these in our daily lives. Privacy
is thus a means for the autonomous self to develop its own sense of
distinctive identity and autonomy, along with other important goods
such as relationships. Only through privacy, then, can the autonomous
self develop that has the capacity to engage in debate and the other
practices of a democratic society (Johnson 2001, ch. 3). In Germany,
rights to privacy are likewise considered as a basic right of an
autonomous person qua citizen in a democratic society. Privacy is also
seen as an instrumental good – primarily as it serves to protect
autonomy, the freedom to express one’s opinion, the “right of
personality” (Persönlichkeitsrecht), and the freedom to express one’s
will (Whitman 2004, 1180ff.).
By contrast, “privacy” in many Asian cultures and countries has
traditionally been understood first of all as a collective rather than an
individual privacy – for example, the privacy of the family vis-à-vis the
larger society (Kitiyadisai 2005). Insofar as something resembling
individual privacy was considered, such privacy was looked upon in
primarily negative ways. For example, Japan’s Pure Land (Jodo-
shinsyu) Buddhist tradition emphasizes the notion of Musi, “no-self,”
as crucial to the Buddhist project of achieving enlightenment –
precisely in the form of the dissolution of the “self,” understood in
Buddhism to be not simply an illusion, but a most pernicious one. As
the elemental “Four Noble Truths” of Buddhism put it, our discontent or
unhappiness as human beings can be traced to desire that can never be
fulfilled (because either we will never obtain those objects or, if we do,
we will lose them again, especially as time and death take them from
us). But such desire, in turn, is generated by the self or ego. Hence, to
eliminate the unhappiness of unfulfilled/unfulfillable desire, all we
need do is eliminate the ego or self. The Buddhist goal of nirvana, or
the “blown-out self,” thus justifies the practice of what from a modern
Western perspective amounts to intentionally violating one’s
“privacy”: in order to purify and thus eliminate one’s “private mind” –
thereby achieving Musi, “no-self” – one should voluntarily share one’s
most intimate and shameful secrets (Nakada and Tamura 2005).
Similarly negative attitudes toward individual privacy have marked
China for most of its history – in part because of the Confucian
emphasis on the good of the larger community (see the discussion of
Confucian ethics in chapter 6). Hence, until only relatively recently,
the Chinese term correlating with individual “privacy” (Yinsi) held
only negative connotations – that is, of a “shameful secret” or “hidden,
bad things” (Lü 2005). Finally, a similar emphasis on community is
apparent in many indigenous traditions. So ubuntu, as we saw Dan
Burk characterize it at the beginning of this chapter, understands
personal identity as “dependent upon and defined by the community”
– in part, as we will see in more detail in chapter 6, as this African
tradition shares with Confucian thought an understanding of the
individual as a relational being: as defined by the multiple
relationships with others in the larger community. In this light, it
makes sense that:
Within the group or community, personal information is common to
the group, and attempts to withhold or sequester personal information
are viewed as abnormal or deviant. While the boundary between
groups may be less permeable to information transfer, ubuntu lacks
any emphasis on individual privacy.
(Burk 2007, 103)
But, as we have seen, these understandings of privacy are undergoing
dramatic changes. This is in part because globalization, as itself driven
by the rapid diffusion of digital media, often thereby increases our
awareness of and interactions with one another cross-culturally. This
in turn leads to a hybridization of diverse cultural values and practices.
In particular, as young people in Asia enjoy a growing material wealth
and thereby a growing physical personal space (i.e., their own room in
a family dwelling – something more or less nonexistent a few decades
ago), and as they are ever more aware, thanks to global media, of
Western notions and practices regarding individual privacy, they
increasingly insist on personal and individual privacy in ways that are
baffling (at best) and frustrating (at worst) to their parents and their
parents’ generation (Hansen and Svarverud 2010; Yan 2010).
These shifts can be seen most dramatically in terms of the laws
surrounding privacy – indeed, following these changing
understandings of selfhood, and thus what counts as “privacy” in both
“Western” and “Eastern” countries, privacy laws have changed so
much over the past decade or so that the two cultures move, in effect,
ever closer to one another. To see how this is so, we take a brief look
first at the European Union and then at the United States.
As we have seen, the European Union has encoded in law since 1995
very strong personal data privacy protections (European Union 1995;
GDPR 2016). The EU Data Privacy Regulations define what counts as
personal and sensitive information: “personal data revealing racial or
ethnic origin, political opinions, religious or philosophical beliefs;
trade-union membership; genetic data, biometric data processed
solely to identify a human being; [and] health-related data.”3 The
GDPR further requires that individuals be notified when such
information is collected about them; individuals then have the right to
review and, if necessary, correct information held about them. As Dan
Burk (2007, 98) emphasizes, individuals have the right to consent –
they must agree to the collection and processing of their personal
information. And, as we have seen, recent legislation makes these
rights to consent – to “opt in” to, for example, data collection as you
browse a website – even stronger. Finally, the Regulations insist that
the transfer of personal information to third parties outside the EU
can occur only if the recipient countries provide the same level of
privacy protection as that encoded in the EU directives. As Burk
further explains, this last requirement has meant that the EU
approach to privacy began to spread more quickly around the world
than its US counterpart (ibid., 100f.). As we will see, this requirement
made especially dramatic impacts in Asia – but only to be
overshadowed by more recent developments.
Finally – in ethical terms that may now be familiar to you from
chapter 6 – Burk characterizes the EU approach as strongly
deontological: it rests upon a conviction that privacy is an inalienable
right – one that states must protect, even if at considerable economic
and other sorts of costs. In particular, as we will explore more fully
below, privacy is essential to democratic processes: to compromise
privacy for any reason is thereby to compromise democracy itself.
In the United States, by contrast, data privacy protection is something
of a patchwork. In general, national or federal regulations address
privacy issues with regard to health matters (e.g., the Health
Insurance Portability and Accountability Act, 1996
[www.hhs.gov/hipaa/index.html]) and some financial information
(e.g., banking and credit information), leaving the rest to individual
states and/or businesses to work out (the latter through so-called
aspirational models of good practice – see Burk 2007, 97; Debatin
2011, 49). The default setting here is the exact opposite of the EU
model: rather than asking individuals to “opt in” to having their
information collected, processed, and distributed in specific ways, the
US approach requires individuals to “opt out” if they have reservations
about how information about them is being collected and possibly
used (Burk 2007, 97). So it is, then, that, if you are sitting in the US
and would like the “opt-in” approach more characteristic of the EU
codes, you’ll need to install the sorts of security software discussed
above.
Burk further observes that this “business-friendly” attitude is in part
the result of a utilitarian approach to the issues of data privacy
protection. Simply put, the US preference is for minimal governmental
involvement and maximum freedom for businesses: the hope is to
minimize the economic – and other – costs of implementing and
enforcing more rigorous data privacy protections, such as those of the
European Union, and thereby maximize business efficiencies and
profitability. Presumably, doing so will lead to the utilitarian goal of
realizing the greatest good for the greatest number – at least in terms
of economic gains and benefits (Burk 2007, 98f.). According to James
Whitman, this US approach is further rooted in nineteenth-century
“emphasis on consumer sovereignty” (2004, 1182), coupled with the
late nineteenth-century enthusiasm for laissez-faire market ideologies
(2004, 1208).
As a last piece in the US patchwork: in the absence of any further
developments in US privacy law, California has moved forward to
develop more EU-style approaches to privacy (Wakabayashi 2018).4
Finally, we take up Asia – meaning, here, the People’s Republic of
China (PRC) and surrounding countries, including the two special
administrative regions of Hong Kong and Macao. As we would expect
in light of the greater emphases in these societies on a relational self,
the greater priority of community harmony, and hence traditional
attitudes toward individual “privacy” as only something hidden or bad
(see above, p. 64), legal definitions of and protections for individual
privacy rights have emerged only relatively recently. In Hong Kong, for
example, individual privacy protections were first introduced as a
means necessary to the development of e-commerce (Tang 2002) –
that is, not, as in earlier Western justifications, for the sake of
individual autonomy, etc. But the Supreme Court of the PRC
established individual privacy rights as “attached” to “reputation
right” – that is, the right to have one’s reputation protected from
slander or defamation. Privacy violations that lead to serious damage
to reputation are thus considered a tort, a personal injury for which
the agent can be sued for damages in a civil court. By 2001, the
Supreme Court established privacy as its own independent right,
justified in part by the view that a violation of individual privacy
amounted to a “spiritual harm.” By 2010, new tort liability law was
enacted that established privacy as a right among other civil rights (Sui
2011). To be sure, critical caveats must be made here regarding the
crucial difference between a law on the books and its enforcement in
society. Moreover, the emerging Chinese SCS seems to throw all of
these privacy protections into very serious question indeed. However
the SCS turns out, these shifts nonetheless represent remarkable
transformations over a relatively short time (cf. Greenleaf 2011).
In sum, while “Westerners” thus head in what was a more “Easterly”
direction in terms of selfhood, privacy, and law, at least some
“Eastern” countries, such as Japan, if not the PRC, appear to be
heading in what was a more “Westerly” direction in those same terms.
The
resulting pattern thus suggests, if not a convergence, then at least a
closer resonance between basic conceptions of selfhood (as both
individual and relational), privacy (as individual but also group), and,
perhaps, the laws defining privacy and its protections. At the same
time, however, the emergence of ever more stringent individual
privacy protections in the EU GDPR (2016) and the apparent erasure
of all such protections in the emerging Chinese SCS make clear that
fundamental, and perhaps irreducible, differences will remain.
“Privacy” and private life: Cultural differences
and ethical pluralism
Whatever the long-term influence these important resonances may
have, the striking differences between – especially – the EU and China
on matters of privacy force us to confront the obvious ethical question:
who’s right?
We first explore this question – and primary ethical responses to it –
by way of an example provided by Soraj Hongladarom, a Thai
Buddhist philosopher. Hongladarom (2007) points out that, while
earlier cross-cultural discussions of privacy tended to emphasize these
sorts of contrasts, there are also important similarities between, say,
Western and Buddhist views. First, Buddhism must emphasize at least
a relative role and place for the individual: while, from an ultimate or
enlightened standpoint, the individual is a pernicious illusion, the
individual remains squarely responsible for his or her realization of
enlightenment. For its part, Western thought – both in premodern
traditions such as that of Aristotle and in modern philosophical
streams such as that of Hegel – includes emphasis on the community,
not simply the individual. From this perspective, Hongladarom has
argued for a Thai conception of individual privacy – one that
ultimately disagrees with Western assumptions regarding the
individual as an absolute reality, but nonetheless retains a sufficiently
strong role and place for the individual. Such a Buddhist individual,
again, is the agent of its own enlightenment, but also serves as a
citizen of a struggling democratic state in Thailand. In this way,
Hongladarom argues, there are strong philosophical grounds for
granting such an individual privacy rights similar to those enjoyed by
Westerners – even if, by comparison, these rights will be more limited
in light of the greater role of the state and greater importance (on both
Buddhist and Confucian grounds) of the community.
In ethical terms, Hongladarom hereby articulates for us an important
ethical pluralism regarding the nature of privacy. Such a pluralism, as
we will explore more fully in chapter 6, stands in the middle ground
between ethical relativism and ethical absolutism. Most briefly, the
important point is that, in such pluralism, it is possible to hold
together both shared norms and values (in this case, privacy) while
these norms and values are understood, interpreted, and/or applied in
diverse ways – that is, in ways that reflect the distinctive values and
norms of diverse cultures. In this way, pluralism allows for a shared
global ethics, on the one hand, while avoiding, on the other hand, a
kind of homogenizing ethics that ignores or obliterates all important
cultural differences. And so, ethical pluralism provides the possibility
of a global ethics made up of shared norms and values while
preserving the essential differences that define diverse cultural
identities.
In the case of “privacy,” these cross-cultural comparisons can thus be
understood to constitute an example of such an ethical pluralism. This
is to say: US-style conceptions of “privacy” as strongly individual and
(earlier) Thai notions of “privacy” as primarily familial privacy thus
present us with strongly different ideas of “privacy.” For the ethical
relativist, these differences would be one more example arguing that
there are no universal values or norms: the validity of ethical norms
and values is solely relative to a given culture and time. In this
instance, the US notion of individual privacy as a positive good is
legitimate – but only if you’re a member of the US culture; and the
(earlier) Thai emphasis on familial privacy alone as ethically
acceptable is also legitimate if you were born and raised in Thai
culture in those days. This is fine – at least as long as people from both
cultures have nothing to do with one another and thus require no
shared norms or values.
For the ethical monist, we can establish a shared norm or value rather
simply: one of these values or norms must be true – absolutely, finally,
and universally – and hence any different value or norm can only be
false. Again, if we had nothing to do with one another, this might be a
workable solution: but in today’s world, it is more or less impossible to
live in such splendid isolation. And so, if the ethical monist has his
way, we must choose which norm or value is right and which is
thereby wrong. This approach seems to condemn us to intolerance and
conflict – not very useful either for genuine understanding of the
Other as Other5 or for efforts to avoid cultural imperialism, much less
warfare.
The ethical pluralist, finally, hopes to avoid such intolerance and
conflict by way of arguing that both cultures share a notion of
“privacy,” but this notion is understood and practiced in different ways
– ways that are directly shaped by each culture’s distinctive traditions
and assumptions. Ethical pluralism thus argues for a middle ground
between relativism and monism. Yes, norms and values vary from
culture to culture – but, contra monism, this does not necessarily
mean that only one cultural norm can be right and the other wrong:
both can be correct as instances of different interpretations of a shared
norm. Contra relativism, because varying cultural norms may thus
instantiate a shared norm, cultural variations of this sort do not
necessarily mean that there are no universally legitimate values or
norms. Rather, the pluralist can argue that in this way privacy –
however widely understood and practiced in diverse cultures and
times – indeed appears to be a human universal (Hongladarom 2007,
110f.).
More recently, Hongladarom has argued for a similar pluralistic
approach to the irreducible differences between Buddhists (again, who
regard the self as a pernicious illusion) and Confucians (who believe in
some form of the self as a reality) vis-à-vis the basic ethical norm of
respect: both can agree that “an individual person is [to be] respected
and protected when she enters the online environment” (2017, 155).
To be sure, not all of our ethical differences will be resolved through
pluralism: again, the contrast between the EU and China on this point
may well be simply irresolvable. But, given that ethical pluralism
works in at least some cases, including the case of privacy, whenever
we encounter strong differences in cultural norms and practices, we
cannot simply assume that our options for dealing with them are
either ethical relativism or ethical monism.
Philosophical and sociological
considerations: New selves, new “privacies?”
These dramatic changes in our conceptions of selfhood, privacy, and
privacy law are part of a still larger discussion. Not surprisingly, there
is considerable debate among philosophers in information and
computing ethics regarding the nature and possible justifications of
privacy, justifications for its protection, etc. (e.g., Rachels 1975; Tavani
2013, 131–73). Herman Tavani helpfully summarizes three basic kinds
of privacy. The first of these is accessibility privacy (freedom from
unwarranted intrusion). This notion of privacy, also formulated as the
right to “being let alone” or “being free from intrusion,” was defended
in the landmark paper by Samuel Warren and Louis Brandeis (1890) –
who thereby made the first explicit claim in the United States that
privacy exists as a legal right (Tavani 2013, 135; cf. Glancy 1979).
Second, decisional privacy is defined as freedom from interference
by others in “one’s personal choices, plans, and
decisions” (Tavani 2013, 135f.). Such privacy, Tavani points out, has
been crucial in the US context in defending freedom of choice
regarding contraception, abortion, and euthanasia. Finally,
informational privacy is a matter of our having the ability to control
information about us that we consider to be personal (ibid., 136).
Tavani goes on to point out that both James Moor (2000) and Helen
Nissenbaum (2010) have developed accounts of privacy that seek to
include these three forms (Tavani 2013, 136–8). Moreover, from my
perspective, what is most helpful about both Moor’s and Nissenbaum’s
accounts is that they move us away from earlier, more spatial and
static conceptions of “privacy,” and foreground instead the relational
dimensions of privacy and private life – namely, the intuitions
developed above that our senses of “privacy” in “Western” societies are
shifting toward notions of “partial privacy” and “group privacy.”
Again, what we think/feel needs protection are those aspects of our
close relationships, as increasingly mediated through (analogue)
digital media, that increasingly define our sense of selfhood as more
relational than individual. In particular, Nissenbaum’s increasingly
influential theory of privacy as “contextual integrity” builds on an
explicitly relational conception of selfhood as introduced by the
philosopher James Rachels (1975). Nissenbaum thereby shows that
what is at stake in our privacy concerns is, first of all, precisely the
context of specific relationships within which a given bit of
information is exchanged. These contexts or “spheres of life” include
education, the marketplace, political life, etc. (Tavani 2013, 138). Each
of these contexts entails, first, its own “norms of appropriateness” that
“determine whether a given type of personal information is either
appropriate or inappropriate to divulge within a particular context”
(ibid.). At the same time, each context is further accompanied by its
own “norms of distribution [that] restrict or limit the flow of
information within and across contexts” (ibid.).
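To make these two kinds of norms concrete, here is a minimal, illustrative sketch in Python. It is not Nissenbaum’s own formalism, and all of its names (Context, Flow, and the example values) are invented for illustration; it simply models a context as a pair of norm sets and checks a proposed information flow against both.

from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    # A proposed transmission of one type of personal information.
    info_type: str   # e.g., "grades"
    sender: str      # e.g., "student"
    recipient: str   # e.g., "professor"

@dataclass
class Context:
    # A "sphere of life" (education, marketplace, etc.) with its two norm sets.
    name: str
    appropriate_info: set       # norms of appropriateness: what may be divulged here
    permitted_channels: set     # norms of distribution: who may pass it to whom

    def respects_contextual_integrity(self, flow):
        # A flow preserves contextual integrity only if it satisfies BOTH norm sets.
        return (flow.info_type in self.appropriate_info
                and (flow.sender, flow.recipient) in self.permitted_channels)

education = Context(
    name="education",
    appropriate_info={"grades", "coursework"},
    permitted_channels={("student", "professor"), ("professor", "registrar")},
)

# Grades may flow from student to professor within the education context ...
assert education.respects_contextual_integrity(Flow("grades", "student", "professor"))
# ... but not from registrar to advertiser: appropriate information, impermissible distribution.
assert not education.respects_contextual_integrity(Flow("grades", "registrar", "advertiser"))

On this toy model, the renegotiation described just below would amount to revising a context’s two sets over time.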
Moreover, these relational contexts are not fixed but, as with
relationships themselves, dynamic: the multiple contexts of our
relationships are subject to constant renegotiation and reformulation.
This happens, for example, when one or more of the persons
constituting a communicative cohort feels his or her “privacy” or
private life has somehow been breached by disclosures others have
made. A common example of how this works in practice is
documented by Stine Lomborg (2012). Lomborg analyzed the
communicative interactions of a prominent Danish blogger and her
audience. On occasion, either the blogger or one of her readers
revealed something that was received as rather too personal, too
individually private. This occasioned a renegotiation process that, in
response to the violation of the contextual norms of the blog (to use
Nissenbaum’s term), more articulately redefined the “line between
what is appropriate to share and what is too private” (Lomborg 2012,
429).
Such relational conceptions of privacy are further consistent with a
third understanding of selfhood and identity – namely, that of
relational autonomy. As the name implies, relational autonomy
stands as a middle ground between more strongly individual and more
strongly relational senses of self. Recall here that a strongly individual
autonomous or free self is the foundation of modern Western
conceptions of democratic norms (equality, respect, fairness, justice),
rights (life, liberty, the pursuit of property – as well as freedom of
expression, privacy, and so on), and thus the defining processes of
debate and deliberation. To fully abandon such a self in favor of a
purely relational self is thereby to eliminate any grounds for holding to
such norms, rights, and processes. Feminist philosophers especially worry about such a loss: whatever the enormous sins and faults of modern liberalism (e.g., as “hyperindividualist,” exclusively rationalist, and thereby overly “masculine” [Christman 2003, 143]), women’s emancipation and gradual moves toward greater equality have, since the Enlightenment articulation of this freedom and affiliated rights, centrally depended on these conceptions.
Relational autonomy thus emerges in order to sustain the emphasis
on autonomy while recognizing the realities and benefits of
relationality. So John Christman characterizes relational autonomy as
taking on board “relations of care, interdependence, and mutual
support that define our lives and which have traditionally marked the
realm of the feminine” (Christman 2003, 143). Andrea Westlund identifies the capacities for reflective endorsement of both one’s own acts and the acts of others as one such set of skills or abilities, noting that these “must be developed during a relatively long period of dependence on parents and other caregivers” (2009, 26). Moreover,
autonomy itself remains relational as it “requires an irreducibly
dialogical form of reflectiveness and responsiveness to others” (ibid.).
Contra more strongly individualistic conceptions of freedom that
stress that we are free only as we are free from the influences of and
commitments to others (e.g., in early modern philosophers such as
Thomas Hobbes), relational autonomy foregrounds ways in which we
are more free through relationship with others: “Some social
influences will not compromise, but instead enhance and improve the
capacities we need for autonomous agency” (ibid., 27). (We will also
see in chapter 6 that relational autonomy, along with more traditional
relational selves, is tightly conjoined with virtue ethics as an
increasingly central ethical framework that complements
utilitarianism and deontology in critical ways.)
Is all of this complicated? Yes, of course – especially as we keep in
mind that all of this is thus continuously under further development
and refinement. But as the development and expansion of Chinese-
style surveillance and Social Credit Systems – largely supported by
more fully relational selves – should make chillingly clear, nothing less
is at stake here than how we understand ourselves as human beings,
and thereby what kinds of freedoms and rights we may – or may not –
have and make claim to. Thereby, nothing less is at stake than how we
are best to live, including determining what social and political
institutions are best suited to the best possible lives.
And so, to paraphrase Socrates: whether we find ourselves in a
swimming pool or the ocean, we must start swimming nonetheless.
Happy swimming!
REFLECTION/DISCUSSION/WRITING QUESTIONS
1 How would you define “privacy” and/or private life? It may be
helpful here to think of what sort of “things” – acts, events, behaviors,
internal notions, imaginations, “information flows” (what kinds?), etc.
– you think of as:
“private” in a strongly individual way;
“publicly private” / “privately public” (e.g., as shared in what Lomborg
characterizes as “public personal spaces” [2012, 428]), and/or what
you would share within a specific group, e.g., via Snapchat, WhatsApp,
etc.; and
“public.”
It may be further helpful to review the following selection from the EU General Data Protection Regulation (GDPR), as a detailed listing of what sorts of “things”
are considered private and thus as protected information: it is
forbidden to process personal data revealing “racial or ethnic origin,
political opinions, religious or philosophical beliefs; trade-union
membership; genetic data, biometric data processed solely to identify
a human being; [and] health-related data.”6
That is, would you agree with or want to modify this list – if the latter,
how?
2 Discuss, as clearly and precisely as possible:
(a) What kind(s) of privacy (and private life) do you believe to be most
important – especially in terms of the three sorts of privacy described
by Tavani?
(b) Given your account of privacy, do you want to justify privacy as an
intrinsic and/or an extrinsic good? If extrinsic, then what is privacy
“good for” – that is, for what other (and, ultimately, intrinsic) goods
does it serve as a means?
(c) What additional sorts of justification(s) can you provide for privacy
as you have defined it?
3 Discuss, as clearly and precisely as possible, how far your
understanding of privacy and private life (privatlivet and intimsfære)
seems to be dependent on:
more individual notions of selfhood and/or more relational notions –
and/or both, i.e., some form of relational autonomy?
more static conceptions of (individual or familial) spaces and/or more
dynamic conceptions (such as Nissenbaum’s) of “privacy” as
contextual – that is, referring to specific “personal spaces” constituted
by a given set of communicative engagements between a (relatively
defined) set of (relational) persons?
4 Can you discern how far your approach to privacy is shaped by
utilitarian arguments (such as those at work, especially, in the US
context) and/or by deontological arguments (as more characteristic of
EU approaches, for example)?
5 We have seen that Soraj Hongladarom argues for a Thai notion of
privacy that rests on especially Buddhist understandings of the self. As
we might expect, Hongladarom goes on further to argue for correlative
data privacy protections – protections that might seem limited as
compared with contemporary Western (especially EU) laws, but are
nonetheless recognizable as protections justified for the sake of
participating in democratic governance, for example.
But Hongladarom goes still further. He draws on the Buddhist analysis
of human discontent as rooted in the ego-illusion to point out:
Violating privacy is motivated by what Buddhists call mental
defilements (kleshas), of which there are three – greed, anger, and
delusion. Since violating privacy normally brings about unfair material
benefits, it is in the category of greed. In any case, the antidote is to
cultivate love and compassion. Problems in the social domain,
according to Buddhists, arise because of these mental defilements, and
the ultimate antidote to social problems lies within the individuals
themselves and their states of mind.
(Hongladarom 2007, 120)
In other words, from a Buddhist perspective, if we want to enjoy
privacy protections, then we must go beyond (negative) laws that
largely tell us what not to do (most simply, don’t violate others’ rights
to privacy) to important positive ethical injunctions that tell us what
to do – namely, to pursue enlightenment (in the form of overcoming
the ego-illusion), in part through cultivating love and compassion for
others.
As we will see in chapter 6, this recommendation is characteristic not
simply of Buddhism but of virtue ethics in the Western tradition. It
further resonates, of course, with “the Golden Rule” – in Christian
formulation: do unto others as you would have them do unto you. But,
of course, the Golden Rule is central to the three Abrahamic faiths of
Judaism, Christianity, and Islam – and, indeed, some argue, is found
throughout the world, beginning with Confucian traditions. At the
same time, this recommendation reflects a Buddhist understanding of
identity as primarily relational. As with the Japanese injunction to
attune our acts toward the harmony of the group, the approach here
resolves privacy issues first by having us (re)shape ourselves to
harmonize better with others by reducing greed and increasing
compassion and love.
(A) How persuasive (or not) do you find Hongladarom’s arguments
and recommendations regarding privacy – including the positive
injunction to minimize greed and maximize compassion? Be as clear
as you can about your arguments/evidence/reasons and/or other
grounds for your response(s).
(B) As we have seen, the national and cultural traditions surrounding
us have a significant influence on our conception of selfhood (more
individual, more relational?) and thereby on our ethical values and
approaches to ethical decision-making. How far can you trace your
(dis)agreements with Hongladarom to the cultural and national
traditions that have shaped your ethical views and sense of selfhood?
That is, if you agree with Hongladarom, is this solely because you
likewise have grown up in a culture more shaped by relational
emphases of selfhood and/or because you are already convinced of the
truths of Buddhism? And/or, if you disagree with Hongladarom, is this
solely because you have grown up in a culture shaped by more
individual emphases of selfhood and/or remain convinced of the
truths of other traditions?
And/or: can you find other reasons/grounds/evidence, etc., for your
(dis)agreement(s) with Hongladarom, beyond those reasons, etc., that
may hold legitimacy primarily in one culture but not in another?
SUGGESTED RESOURCES FOR FURTHER RESEARCH/REFLECTION/WRITING
1. Culture?
We have begun to explore how diverse cultures (including as they
change over time) correlate with our basic assumptions regarding
selfhood and human nature (beginning with more individual vis-à-vis
more relational) and thereby our likely initial attitudes toward
“privacy.” In the next chapter, we will expand on these correlations
regarding our basic assumptions about property (as a start, whether
more individual-exclusive vis-à-vis more relational-inclusive). At the
same time, I have emphasized that we should always keep in mind that
“culture” and such cultural characterizations are to be treated as
heuristics – initial rules of thumb that are useful, indeed essential, for
how we first encounter and interpret the behaviors, choices, actions,
etc., of those from cultural backgrounds different from our own. They
are not to be treated, that is, as some sort of “essentialist” or
deterministic generalization that categorizes all members of a given
“culture” wholesale within the same box.
It will then be helpful to reflect more carefully on what you think
“culture” may be, and how your own background culture shapes your
own basic assumptions, behaviors, etc. – and how it may not: perhaps
you’re an exception to the generalizations – perhaps because you have
consciously reflected on and rejected one or more aspects of your
background culture?
For example, many people in my class and generation grew up within
the strongly racist environments of the 1950s–60s United States. But
many of us also consciously chose to reject racism as best we could, in
the name of basic democratic norms and values such as equality and
respect. More broadly, especially those of us privileged to travel and
study abroad often find that these experiences give us a new
perspective on our “culture of origin” – and while we may embrace
certain aspects of our home culture all the more warmly as a result, we
may also seek to reduce or eliminate one or more elements of that
culture. In my case, the Scandinavian countries, while by no means
perfect on this point, enjoy the highest levels of equality and gender
equality in the developed world (World Economic Forum 2016, 9f.).
Living with and experiencing what this means in everyday practices
and attitudes – for example, a much higher proportion of women in
politics and other forms of cultural leadership, as well as in everyday
workplace roles, from bus driver to policewoman – starkly contrasts
with the more hierarchical cultures of the US (as well as Germany, the
UK, etc.). Experiencing such equality as an everyday reality thus helps
fuel my efforts to reject various norms and practices of gender
inequality as part of my background culture.
One of the most comprehensive and intriguing accounts of such
cultural norms and differences has been developed over the past 30
years or so by the World Values Survey (WVS: worldvaluessurvey.org).
Review WVS “Findings and Insights”
(www.worldvaluessurvey.org/WVSContents.jsp), including the
discussion of:
Traditional values versus Secular-rational values
How Culture Varies
Aspirations for Democracy
Empowerment of Citizens
Globalization and converging Values
Gender Values
Religion, and
Happiness and Life Satisfaction
Identify on the most recent “Cultural map” – namely, “WVS wave 6
(2010–14)” – where you, and/or your cohort in a discussion group,
come from.
A. Given your understanding of what the map indicates regarding
basic cultural attitudes – especially in contrast with other neighboring
and/or distant countries/cultures:
How far do you agree and disagree with the characterizations of your
home culture(s)?
B. Presuming you’ve had the opportunity to explore one or more of the
“other” countries/cultures represented in the WVS:
How far do you agree and disagree with the characterizations of these
“other” culture(s)?
C. What does all of this tell you regarding how far such cultural
generalizations may and/or may not be accurate (e.g., for whom and
how many in a given country/culture)?
D. What does all of this tell you regarding how far such cultural
generalizations may – and may not – be useful or helpful in our efforts
to understand not only our own backgrounds, but also those we may
be privileged to meet in the course of our travels, life, and work?
E. Is there anything else that strikes and/or occurs to you here?
2. The privacy paradox
Although survey results show that the privacy of their personal data is
an important issue for online users worldwide, most users rarely make
an effort to protect this data actively and often even give it away
voluntarily.
(Gerber, Gerber, and Volkamer 2018, 226)
A. The privacy paradox has now been extensively researched and
documented – and not only in Western societies which have
traditionally emphasized (individual) privacy rights. In addition, as
discussed above, Chinese attitudes toward privacy have shifted in
more individual directions – and so a recent paper uses classical
Western privacy theorists to explore the privacy paradox at work in
users of the prominent Chinese social media venue WeChat (Chen and
Cheung 2018).
(1) Wherever you may come from, then – what are your experiences
and impressions of “the privacy paradox?”
(2) In particular, what sorts of steps do you take to protect your privacy – and how far do you likewise confirm the privacy paradox, i.e., shy away from the more active measures (e.g., using encryption technologies), the financial costs (e.g., of better security software), and so on that would enhance your digital security and privacy?
(3) Some of your reasons may be culturally variable in important
ways. For example, my Norwegian students – living in a country with
the highest trust levels in the developed world7 – often say quite
simply that they trust the Norwegian state, the telecom operators, and
their ISP providers to protect their privacy adequately. More broadly,
citizens of EU states may also have some trust (though demonstrably
lower than in Norway) that they are being protected – first of all, by
the increasingly stringent regulations of the GDPR. By contrast, US
citizens are split on whether or not they can trust the Federal
Government to protect their privacy adequately (Smith 2017). Given
your specific national/cultural background – do you believe you have
good reason to trust your nation-state and your service providers to
protect your privacy and security adequately? If so, this would be a
good reason to worry less (perhaps). But if not – what other reasons
might you give for acting less to protect your privacy than might be
ideal and most desirable?
(See also the article by Hargittai and Marwick [2016] listed below as
an additional suggestion for reading and discussion.)
B. Given your views on what counts as privacy and private life, and
their relative importance:
(1) Are the current regulations in the country in which you find
yourself adequate for protecting what you take to be your rights to
privacy and private life? Or should there be stronger protections of
anonymity and privacy – whether by governments and/or by
corporations and service providers – even at the cost of (some) ease of
use and convenience?
(2) Are you, like most of the rest of us, prey to “the privacy paradox” –
and/or: are there more steps you might take to increase your privacy
and data security? See, e.g.:
*privacy not included: Shop Safe This Holiday Season,
https://foundation.mozilla.org/en/privacynotincluded
Surveillance Self-Defense: Tips, Tools and How-tos for Safer Online
Communications, https://ssd.eff.org
SUGGESTED RESOURCES FOR FURTHER RESEARCH/REFLECTION/WRITING
Fuchs, Christian (2011) An Alternative View of Privacy on Facebook,
Information, 2(1): 140–65.
Fuchs takes up a critical ethical concern, namely, how far our
identities are commodified – turned into material for sale, first of all
in the form of marketing information – through our use of social
networking sites (SNSs) such as Facebook. Drawing on both Marxian
frameworks of political economy and theorists such as Hannah Arendt
and Jürgen Habermas, Fuchs’s analysis is further important for
connecting privacy matters with central issues of democratic
governance.
Debatin, Bernhard (2011) Ethics, Privacy, and Self-Restraint in Social
Networking, pp. 47–60 in S. Trepte and L. Reinecke (eds.), Privacy
Online. Berlin: Springer.
Debatin provides an excellent summary of privacy conceptions and
law as background for discussing privacy matters on SNSs – again,
including the importance of privacy to democratic processes,
especially as influenced by the work of Habermas. His arguments for a
“privacy literacy” and an “ethics of self-restraint” can be usefully
compared to the approaches developed in this chapter, and especially
the recommendations from virtue ethics offered by Vallor (2009, 2011)
and Hongladarom (2007) explored above.
Hargittai, Eszter and Marwick, Alice (2016) “What Can I Really Do?” Explaining the Privacy Paradox with Online Apathy, International Journal of Communication, 10: 3737–57.
Hargittai and Marwick find that, while young people very much
understand and care about privacy and privacy risks online, they
https://foundation.mozilla.org/en/privacynotincluded
https://ssd.eff.org
largely feel that they have little ability to manage these. This article
may further be helpfully compared with the more extensive and
international surveys reported by Gerber, Gerber, and Volkamer
(2018), as well as the Chinese case documented by Chen and Cheung
(2018).
Notes
1 I remain grateful to Niamh Ní Bhroin, University of Oslo, for first
pointing me toward this resource.
2 We easily recognize that some things are valuable primarily as they
serve as means to other goods or ends: so, commonly, many
students value their education as an extrinsic good – that is,
something that is valuable as a means to achieving some other
good, such as a job, a high salary, etc. But these in turn may be
simply extrinsic goods – that is, goods that are likewise valuable not
so much in themselves (e.g., few of us – unfortunately – think of
our work as an intrinsic good, as something worthwhile in itself,
whether or not we are paid for it). So it seems that, somewhere, the
chain of justifications for extrinsic goods must come to rest at an
intrinsic good – something that is simply worthwhile in and of
itself. Or else, as Aristotle famously argued, we are faced with an
infinite regress of an extrinsic good being justified by a further
extrinsic good, etc. Then the difficulty becomes one of finding such
an intrinsic good – indeed, one that all of us would agree is valuable
in and of itself. But, as Aristotle further argued, eudaimonia – often
translated as “happiness,” but better translated as “contentment” –
is a good we all recognize as intrinsically valuable. That is, we may
well ask someone why they want to attend university – i.e., what
further good justifies such attendance if she or he believes that
attending university is only an extrinsic good. But we don’t seem to
need to ask why someone would want to be happy or content: that
is, happiness or contentment appears to be good in itself, and thus
does not require further justification as a means to some further
end.
3 Article 4(13), (14), and (15), and Article 9 and Recitals (51) to (56) of the GDPR: https://ec.europa.eu/info/law/law-topic/data-protection/reform/rules-business-and-organisations/legal-grounds-processing-data/sensitive-data/what-personal-data-considered-sensitive_en.
4 My thanks to Dan Burk for pointing me toward this resource.
5 The phrase “Other as Other” is intended to suggest that we
recognize the Other as fully equal, fully human, while
simultaneously irreducibly different from us. This draws from
Emmanuel Levinas’s analysis of “the Other as Other,” as a positive
“alterity” (e.g., Levinas 1987). By contrast, I use “other” – i.e.,
without a capital – to signal a viewpoint or perspective on the
“other” whose difference from ourselves at least initially inspires
suspicion, fear, and/or contempt for the other seen as inferior, etc.
This interpretation of difference between “us” and “them” is
familiar as the viewpoint of ethnocentrism and related perspectives
of racism, sexism, etc.
6 See note 3, above.
7 As measured in the World Values and European Values Surveys,
trust levels in the Scandinavian countries are the highest in the
world: 76 percent of Danes and 75.1 percent of Norwegians agree
that “most people can be trusted,” in contrast with, e.g., 35.1
percent for the United States (Robinson 2016).
CHAPTER THREE
Copying and Distributing via Digital Media:
Copyright, Copyleft, Global Perspectives
[W]hen you share, post, or upload content that is covered by
intellectual property rights (like photos or videos) on or in connection
with our Products, you grant us a non-exclusive, transferable, sub-
licensable, royalty-free, and worldwide license to host, use, distribute,
modify, run, copy, publicly perform or display, translate, and create
derivative works of your content (consistent with your privacy and
application settings).
(Facebook Terms of Use [January 19, 2019], www.facebook.com/terms.php)
We work closely with our member record companies to ensure that
fans, parents, students, and others in the business have the tools and
the resources they need to make the right listening, purchasing and
technical decisions. We also work hard to protect artists and the music
community from music theft.
(Recording Industry Association of America [RIAA], “About Piracy,”
www.riaa.com/resources-learning/about-piracy)
“Free software” is a matter of liberty, not price. To understand the
concept, you should think of “free” as in “free speech,” not as in “free
beer.”
(The Free Software Definition [Richard Stallman / GNU Operating System],
www.gnu.org/philosophy/free-sw.html)
[C]opying may be an important living process for a Confucian Chinese
to understand human behaviour, to improve life through self-
cultivation and to transmit knowledge to the posterity.
(Yu 2012, 4)
Chapter overview
I begin with an example that is likely far removed from the everyday
experiences of contemporary students in the so-called developed
world – namely, a CD as a music medium. This example is an
important starting point, however, for two reasons. One, CDs remain a
primary medium for music consumption in Latin America and Africa,
for example, and so the case-study will be directly useful for students
and readers in such domains. Two, the example remains foundational
for how we reflect and argue about intellectual property, property
rights, and thereby the conditions under which copying such materials
may or may not be ethically legitimate, whether in simple peer-to-peer
sharing networks or in the more complex discussions concerning
remix (cf. Latonero and Sinnreich 2014; Ess 2016). As a start: is copying a digital soundfile like (or even the same as) stealing a physical artifact? And/or is copying (at least in some forms) not like stealing a physical artifact, and so ethically justifiable, at least under some conditions? This example is thus pedagogically useful as it introduces the logical matters of analogical arguments and questionable analogy.
I then describe US and European approaches to copyright law and
their important ethical differences. This leads to discussion of the so-
called copyleft approaches and important examples of their
application in the Free/Libre/Open Source Software (FLOSS)
movements. The terms and players in these debates have shifted
somewhat in recent years, but these movements and their arguments
remain central to contemporary debates over copyright. A second set
of reflection/discussion/writing questions helps us practice applying
these diverse approaches in conjunction with the ethical frameworks
of utilitarianism and deontology.
Lastly, we take up the cultural backgrounds and diverse cultural
traditions at work here – specifically, Confucian thought and the
(Southern) African framework of ubuntu. A set of
reflection/discussion/writing exercises attends to the ethical questions
occasioned by these cultural considerations, along with virtue ethics as
an increasingly prominent complement to utilitarianism and
deontology. Finally, a set of additional resources and advanced
exercises provides entry points into more contemporary debates over
copyright vis-à-vis remix, etc.
INITIAL REFLECTION/DISCUSSION/WRITING QUESTIONS: STEALING VS.
ILLEGAL DOWNLOADING
(A) You and a friend are leaving a concert featuring local bands, one of
which really appeals to you. Happily, the band has a CD for sale at a
table at the back of the concert venue; less happily, at the moment you
can’t afford the purchase. No problem, your friend tells you: I’ll buy
one, and while the salesperson is distracted with recording the
purchase, I can easily take a second copy without anyone noticing. No
worries, I’ve done this a million times before and never gotten caught,
s/he assures you.
As you consider the above scenario carefully …
1. What seem to be your options? That is, the scenario suggests an
either/or: either you steal the CD and, presuming you don’t get caught,
get to enjoy some great music for free – or you don’t. These may well
be your two primary options, but, in ethical analysis especially, it is
always a good idea to see if we are clear about all the realistic options,
not just the most obvious ones. (Note: after you’ve done this exercise,
you may want to review section 4 in chapter 6 on feminist approaches
to ethics and an ethics of care; see pp. 255f.)
2. Develop – individually, perhaps in group discussion, and/or as a
class – as fully as you can, the arguments, evidence, reasons, and/or
other grounds that support each of the options you describe in (1)
above.
3. Given that you have likely described at least two possible options,
each with reasonably strong supporting arguments, at this point, can
you provide any additional arguments, evidence, reasons, and/or
other grounds for a specific choice that help justify that choice as the
better of the available options?
4. (Optional: You may want to review at least the first two ethical
frameworks discussed in chapter 6, consequentialism/utilitarianism
and deontology. After doing so, return to the arguments, etc., that
you’ve provided above. Do you notice whether your arguments are
more consequentialist, perhaps utilitarian, and/or more deontological
in some way?)
OK, hold those thoughts …
(B) A good friend of yours is in a band that is struggling to gain
recognition and an audience. All the band members are just getting by
on their day jobs – the band as such doesn’t make enough money to
support any of the members full-time. The band has just produced a
new album, and they’re hoping that it will become a major hit. Like
most musicians, they offer a free sample track on their website –
hoping, of course, that this will lead to sales of the full album at the
going price of US$15.00. You are no stranger to illegally downloading
music from the internet. But, since you want to support the band,
you’ve gone ahead and paid the US$15.00 for your legal copy of the
full album.
1. While, in this circumstance, you are willing to pay the US$15.00
required for downloading a legal copy of the album, presume that you
also think that under some circumstances it’s OK to download music
from the internet illegally. With regard to the latter case(s), what are
your arguments, evidence, reasons, and/or other grounds for
justifying such illegal downloading? (NB: this question assumes that a
strong ethical justification both is distinct from, and, indeed, may
override, arguments based exclusively on current law.)
2. Presuming that you’ve now marshalled some good arguments, etc.,
that justify at least some sorts of illegal downloading – what
arguments, evidence, reasons, and/or other grounds come into play in
the instance of your deciding not to download illegally the full album
of your friend’s band?
3. (Optional: Again, you may want to review at least the first two
ethical frameworks discussed in chapter 6,
consequentialism/utilitarianism and deontology. After doing so,
return to the arguments, etc., that you’ve provided above. Can you
discern whether your arguments are more consequentialist, perhaps
utilitarian, and/or more deontological in some way?)
(C) Another friend who likes the band’s free sample track asks you if
you’d mind making a copy of your copy of the album, so that he can
either:
(i) highlight the band’s music at an upcoming party where he’s going
to provide the music – in part, so that the album might generate a few
more sales; and/or
(ii) make copies of the album to give to friends of his who are also
interested in the music; and/or
(iii) put a copy of the album on his computer so that it is available to
others on the internet, using one of the current peer-to-peer file
sharing networks; and/or
(iv) all of the above.
1. If you think that you might agree to (i), but not to (iii), explain as
best you can:
(a) what the relevant differences are between these two scenarios; and
(b) what arguments, etc., you can provide that can justify your ethical
position in both cases.
2. What is your response to (ii) – that is, something of a middle
ground between (i) using copies of music to help the band by (it is hoped) generating sales; and (iii) making copies freely available to anyone
interested on the internet, which might well lead to a reduction of the
band’s sales of its new album? Again, for our purposes, whatever your
response here, what is important is your analysis of the choice/action
and the arguments, etc., that support it.
3. (Optional: Again, as with the optional questions above, you may
want to review the arguments developed here vis-à-vis the ethical
frameworks of consequentialism/utilitarianism and deontology, if only
to discern which set of arguments you tend to use – so far.)
4. It is likely that at least some of your class would have responded to
the scenario described in (A) – i.e., the possibility of stealing a CD
after a local concert – by arguing that this would not be a good idea.
There are at least two likely arguments here: one consequentialist
(even if the risk of getting caught is small, the consequences of getting
caught are potentially catastrophic, and so it’s better not to take such a
chance); and a second, more deontological argument (stealing is
simply wrong, even if by stealing you might gain something desirable
and enjoyable).
By contrast, there will likely be many members of the class who are
perfectly happy to download, say, a song track or two from a famous
(and wealthy) artist whose work is distributed by equally well-to-do
multinational corporations. Here, at least in my experience, the
arguments tend to be primarily consequentialist – for example, the
chances of my getting caught are extremely small, and the very modest
profit that both the artist and the multinational corporation lose by my
not paying for a legal copy will never be missed by either, since both
are already so financially well-off.
(a) Are there any additional arguments that occur to you and/or others
in your group/class that work to justify not stealing in the first case,
but do justify illegal downloads in the second case?
(b) Given the arguments that you uncover here, do these arguments
always derive from the same framework? Again, it may be that the
arguments against stealing a physical copy of a CD include
deontological arguments, while arguments for illegal downloading are
primarily consequentialist.
If this is the case, then the disagreement between these two cases runs
beyond the first-order level of what we are to do in a particular
instance: the disagreement includes a second-order or “meta-
theoretical” difference as to which ethical framework(s) we are to
make use of (i.e., either consequentialism and/or deontology – and/or
any of the additional frameworks described in chapter 6).
(c) At this point, it may be sufficient simply to notice these differences,
so far as they seem to be at work – and observe that, if our arguments
do derive from different frameworks, then perhaps there is not quite
the contradiction that may first appear to be the case (i.e., between
disapproving of stealing a CD physically while approving of illegally
downloading a virtual copy of one).
That is, if our arguments against and for (respectively) these forms of
stealing derive from different frameworks, then to say that there’s a
contradiction here is like saying that there’s a contradiction between
the rules of American baseball and the rules of European soccer. This
doesn’t make immediate sense: it seems rather that, because these are
two different games played under two different sets of rules, there can
be no serious contradiction between them.
While this observation would relieve us of a first-order contradiction,
it nonetheless still leaves us with a second-order question – namely,
how do we justify – or, to use Aristotle’s suggestion, judge (i.e., use
phronēsis) – using a specific framework in one instance and another
framework in a different instance?
Thoughts?
(D) As we proceed in applying familiar ethical frameworks to the
ethical challenges evoked by new technologies, we inevitably proceed
by way of analogy. And so, in the scenarios described above, I have
suggested an analogy between physically stealing a copy of a CD
following a local concert and illegally downloading a copy from the
internet.
Just to make it explicit – an analogy argument based on the above
scenarios might look like this:
We agree that stealing a physical CD after a concert is wrong.
Downloading an illegal copy of a music album is like stealing a
physical CD.
Therefore, downloading an illegal copy of a music album is also
wrong.
But, as good logicians know, every analogy runs the risk of becoming
questionable. Such an analogy, rather than helpfully leading us to
justifiable conclusions, may instead mislead us. Happily, you don’t
have to be a logician to see how this is so: rather, as a start, we can
draw on the idiomatic phrase “comparing apples and oranges.” That is,
we sometimes recognize rather easily that a given analogy or
comparison is actually false or misleading somehow – in part because
the comparison in fact holds together two radically different sorts of
things (the apple and the orange). If an argument rests on such a questionable comparison, its conclusion (in this case, that downloading an illegal copy is also wrong) is not strongly supported.
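Schematically – as a sketch in standard predicate-logic notation, not notation drawn from the scenarios themselves – the analogy argument has the form

\[
W(a), \qquad a \sim_R b, \qquad \therefore\; W(b)
\]

where \(W(x)\) reads “x is ethically wrong” and \(a \sim_R b\) reads “a and b are similar in the ethically relevant respects R.” The conclusion inherits its support entirely from the second premise: exhibiting an ethically relevant difference between a and b undermines \(a \sim_R b\), and with it the support for \(W(b)\).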
So, especially if you disagree with the conclusion in the above
argument – that illegally downloading a copy of an album on the
internet is ethically wrong – you might be able to make your case by
arguing for one or more important differences between the two
scenarios that are held together in the analogy argument.
So: are there important, ethically relevant differences between these
two scenarios – and, if so, what are they?
The ethics of copying: Is it theft, Open Source,
or Confucian homage to the master?
Intellectual property: Three (Western) approaches
As we saw in chapter 1, there are a number of characteristics of digital
media that make the copying and distribution of various kinds of
information – whether representing software, a text, a song, or video,
etc. – much easier than with analogue media. That is, once we have
access to the various components required – access that, despite the
grave difficulties of the “digital divide,” is likewise growing rapidly
around the world – copying and distributing a file in digital format is
both trivially easy and all but cost-free.
Moreover, the general rules, guidelines, and laws applicable to such
copying are wide-ranging and frequently shifting. In its ongoing battle
against illegal music downloading, the entertainment industry
relentlessly lobbies for more stringent laws intended to stop (or at
least slow down) widespread distribution of music and video files on
the internet via peer-to-peer (p2p) file-sharing networks. These
industries have likewise pushed for digital rights management (DRM)
and copy protection schemes also designed to prevent illegal copying
or piracy. Such efforts are backed in the United States by the Digital
Millennium Copyright Act (DMCA) of 1998. According to critics such
as the Electronic Frontier Foundation (EFF), “DRM has proliferated
thanks to the Digital Millennium Copyright Act of 1998 (DMCA),
which sought to outlaw any attempt to bypass DRM”
(www.eff.org/issues/drm). They go on to note:
Corporations claim that DRM is necessary to fight copyright
infringement online and keep consumers safe from viruses. But there’s
no evidence that DRM helps fight either of those. Instead DRM helps
big business stifle innovation and competition by making it easy to
quash “unauthorized” uses of media and technology.
(Ibid.)
And worse: Boatema Boateng (2011) is but one of many severe critics
of DRM as a means of maintaining US dominance in the
entertainment industry, especially to the disadvantage of developing
countries, as we will explore more fully below.
A potentially important development in these struggles is the
establishment of the Pirate Party. Most readers will know that the Pirate Bay is one of the primary sites for sharing files via peer-to-peer (p2p)
networking, using BitTorrent or similar file-sharing software. Despite
ongoing blocking efforts worldwide, the Pirate Bay website is alive and
well – sort of. The site itself is often down or blocked; alternative
approaches via proxies pop up on a daily basis (Moseley 2019). Along
the way, the Pirate Party was founded as a political party, first in
Sweden in 2006. The basic principles of the Party are clear: reform of
copyright law, abolition of the patent system, and respect for the right
to privacy (www2.piratpartiet.se/international/english). The Party’s
most striking success has been in Iceland: in 2016, the Party won 10
Parliamentary seats out of 63
(https://en.wikipedia.org/wiki/Pirate_Party_(Iceland)). Since then,
however, the Party has apparently receded somewhat. How far the
Pirate parties might manage to transform the current laws regarding
copyright and patents thus remains very much an open question.
The polarities exemplified by the RIAA (Recording Industry
Association of America) vs. the EFF and the Pirate parties in fact entail
at least three major positions or streams of response that we can
consider as ethical responses to these sorts of dilemmas.
(a) Copyright in the United States and Europe
To start: as Dan Burk (2007) characterizes it, intellectual property (IP)
law in the United States is shaped by a utilitarian ethic (see chapter
6), one that argues that copyright and other forms of intellectual
property protection are justified as these contribute to the larger
public good over the long run. That is, proponents of this view believe
that authors, artists, software designers, and other creative agents will
take the trouble to innovate and develop new products and services
that will benefit the larger public only if those agents can themselves
be assured of a significant personal reward in terms of money or other
economic goods. This means in practice, however, that it is principally the industries with a strong economic interest in copyright and other protections that argue and lobby for them.
Indeed, the interests and possible benefits of the individual agent are
secondary in this view. Given its utilitarian framework, “The rights of
the author should at least in theory extend no further than necessary
to benefit the public and conceivably could be eliminated entirely if a
convincing case against public benefit could be shown” (Burk 2007,
96).
By contrast, European approaches to copyright can be characterized as
more deontological in character. As Burk puts it:
copyright is justified as an intrinsic right of the author, a necessary
recognition of the author’s identity or personhood. … the general
rationale for copyright in this tradition regards creative work as an
artefact that has been invested with some measure of the author’s
personality or that reflects the author’s individuality. Out of respect
for the autonomy and humanity of the author, that artefact deserves
legal recognition.
(Ibid.; emphasis added)
Burk suggests that we are thus caught in an international competition
between the US and the EU as to which of these approaches to
copyright will prevail – with the US currently dominating (ibid., 99–
100).
The US dominance in these domains is criticized across the globe for
numerous reasons – including, for example, how the US frameworks
and market dominance work to the disadvantage of developing
countries and especially indigenous peoples. Specifically, the US
presumptions of music, film, and so on as individual and exclusive
property directly contradict cultural frameworks and traditions that
treat many forms of property as public goods instead: these, by
default, are openly sharable by community members (Boateng 2011).
Such conceptions of property as shared and inclusive rather than as
individual/exclusive are at the heart of a third ethical response to
these sorts of dilemmas – namely, copyleft/FLOSS.
(b) Copyleft/FLOSS
Alternatives to what are seen as excessively restrictive conditions,
especially on the development and use of computer software, have
been developed – initially under the rubric of Free and Open Source
Software (FOSS). The more inclusive acronym, FLOSS – Free/Libre/Open Source Software – now predominates, in recognition that much of the interest and work here operates in Romance-language countries (primarily Latin America and the francophone countries).
This rubric, in fact, conjoins two important but conflicting
philosophical and ethical frameworks – those of the free software (FS)
movement, affiliated with Richard Stallman and the Free Software
Foundation, and those of the subsequent Open Source Initiative (OSI),
begun in 1998 by Eric Raymond and others. Both share the common
goal of fostering the development of software to be made freely
available for others to copy, use, modify, and then redistribute. But the
free software movement began in conscious opposition to commercial
development of profit-oriented proprietary software and the copyright
schemes seen to protect such software. By contrast, the Open Source
Initiative aimed toward making free software more attractive to for-
profit businesses. These differences are significant for more recent
shifts from copyleft to “pragmatic openness” (Aufderheide et al. 2019).
But, for now, we can begin to explore FLOSS by way of Stallman’s
basic definition of what “free” in “free software” means:
Free software is a matter of the users’ freedom to run, copy, distribute,
study, change and improve the software. More precisely, it refers to
four kinds of freedom, for the users of the software:
The freedom to run the program, for any purpose (freedom 0).
The freedom to study how the program works, and adapt it to your
needs (freedom 1). Access to the source code is a precondition for
this.
The freedom to redistribute copies so you can help your neighbor
(freedom 2).
The freedom to improve the program, and release your
improvements to the public, so that the whole community benefits
(freedom 3). Access to the source code is a precondition for this.
A program is free software if users have all of these freedoms. Thus,
you should be free to redistribute copies, either with or without
modifications, either gratis or charging a fee for distribution, to
anyone anywhere. Being free to do these things means (among other
things) that you do not have to ask or pay for permission.
(The Free Software Definition [Richard Stallman / GNU Operating System],
www.gnu.org/philosophy/free-sw.html)
Ethically, what is interesting here is the justification for such freedom
in terms of benefits to the whole community. Rather than relying on
copyright schemes as oriented toward either economic incentives (US)
or protecting authorial rights (EU), the free software movement begins
with the community good as justifying the conviction that the
potential benefits of computer software (and information more
generally) should be shared as broadly and equally as possible.
To understand this properly, we first need to understand that
“property right” primarily means a right to access and use something
– whether a material item (your pen, backpack, computer, bicycle,
etc.) or something less material, including “intellectual property” (an
author’s words or a computer programmer’s code). Our increasing preference for streaming services for music and films, such as Spotify or Netflix, highlights such rights to access: consumers of such services seem less interested in owning physical media such as CDs or DVDs, and are instead satisfied with access to entertainment on demand. Given that property means first of all a
right to access, we can distinguish between copyright and
http://www.gnu.org/philosophy/free-sw.html
copyleft/FLOSS approaches in terms of exclusive and inclusive
property rights. Briefly, the US and European copyright approaches
tend to presume individual and exclusive property rights. That is,
property rights (of access and use) belong to the individual owner: the
“default setting” of such exclusive rights is that the owner has the right
to exclude others from use of and access to his or her property.
Copyleft/FLOSS approaches, by contrast, involve notions of inclusive
property rights. So, in Richard Stallman’s definition of free software
(quoted above), the starting point is the users’ freedom – i.e., a
community of software users – not the individual’s right to exclude
others from use and access.
Similarly, the Creative Commons (CC) approach, while recognizing
and protecting individual rights (“some rights reserved”), does so in a
way that is inclusive: “by default” CC recognizes the rights of others to
access and use property. So the Creative Commons “Attribution-
Noncommercial-ShareAlike 3.0 United States” license reads:
You are free to:
Share – copy and redistribute the material in any medium or format
Adapt – remix, transform, and build upon the material
The licensor cannot revoke these freedoms as long as you follow the
license terms.
Under the following terms:
Attribution – You must give appropriate credit, provide a link to the
license, and indicate if changes were made. You may do so in any
reasonable manner, but not in any way that suggests the licensor
endorses you or your use.
NonCommercial – You may not use the material for commercial
purposes.
ShareAlike – If you remix, transform, or build upon the material, you
must distribute your contributions under the same license as the
original.
No additional restrictions – You may not apply legal terms or
technological measures that legally restrict others from doing anything
the license permits.
(https://creativecommons.org/licenses/by-nc-sa/3.0/us)
That is, individuals retain their “moral rights” – including a right to
exclude others from using one’s property for others’ commercial
advantage. At the same time, however, others’ rights of access and use,
such as copying, distributing, and remixing an individual’s property,
are likewise granted under this license: the individual owner’s rights
are in this way inclusive rather than exclusive.
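The contrast between exclusive and inclusive “default settings” can be made concrete in a short sketch. The following Python toy model is not any actual licensing API – the type and function names are invented for illustration – but it encodes the CC BY-NC-SA conditions quoted above as a simple permission check whose default answer is “yes”:

from dataclasses import dataclass

@dataclass
class ProposedUse:
    # An illustrative description of what someone wants to do with a licensed work.
    gives_attribution: bool
    is_commercial: bool
    is_derivative: bool         # remixed, transformed, or built upon
    same_license_terms: bool    # relevant only for derivatives (ShareAlike)
    adds_restrictions: bool     # extra legal/technological limits on others

def permitted_under_cc_by_nc_sa(use):
    # Mirror the conditions of the CC BY-NC-SA license quoted above.
    if not use.gives_attribution:                         # Attribution
        return False
    if use.is_commercial:                                 # NonCommercial
        return False
    if use.is_derivative and not use.same_license_terms:  # ShareAlike
        return False
    if use.adds_restrictions:                             # No additional restrictions
        return False
    return True   # inclusive default: sharing and adapting are otherwise permitted

# A credited, non-commercial remix redistributed under the same terms is permitted:
assert permitted_under_cc_by_nc_sa(ProposedUse(
    gives_attribution=True, is_commercial=False,
    is_derivative=True, same_license_terms=True, adds_restrictions=False))

Under a fully exclusive copyright default, the analogous check would return False for any use not explicitly licensed by the owner; returning True unless a listed condition is violated is precisely the inclusive default that distinguishes copyleft and Creative Commons approaches.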
We will see as we turn to the cultural backgrounds at work here that a
wide range of non-Western traditions and approaches to property –
beginning with the (Southern) African tradition of ubuntu (next
section) – likewise stress inclusive rather than exclusive rights. We
will further see a striking middle ground in Scandinavian laws and
practices concerning “all peoples’ rights” (allemannsretten) regarding
access to “nature” at large – including otherwise private property.
FLOSS in practice: the Linux operating system
In the early 1990s, Linus Torvalds developed a variant of the UNIX
operating system (OS)1 that was intended for free distribution from
the outset – and this in the free software sense: Torvalds distributed
his software under Stallman’s GNU General Public License.
Subsequently, a great deal of FLOSS work focused on the development
and distribution of the Linux operating system and affiliated
applications.
Linux has become an increasingly mainstream OS, in part driven by an
ever-growing array of diverse software packages and applications.
Linux distributions, as compared with Windows and Macintosh
operating systems, demand less computing power and are hence well
suited to increasingly popular computers such as the Raspberry Pi.
Linux also (usually) runs well on older computers that can no longer
run current versions of Windows or Macintosh: Linux thus contributes
to extending the lifetime of these devices and thereby reducing
“eWaste” – the highly problematic stream of discarded electronic
devices whose disposal and recycling can result in devastating human
and environmental consequences (BusinessGhana 2018).
The Ubuntu distribution is one of the most popular “distros” of Linux.
The Ubuntu website defines its mission as follows:
To bring free software to the widest audience. In an era where the
frontiers of innovation are public, and not private, the platforms for
consuming that innovation should enable everyone to participate.
(www.ubuntu.com/community/mission)
This mission is justified in part by an appeal to ubuntu as a term and
concept:
Ubuntu is an ancient African word meaning “humanity to others”. It is
often described as reminding us that “I am what I am because of who
we all are.”
(www.ubuntu.com/about)
To say this a little more fully: such “humanity to others” and our
understanding that our identity is inextricably interwoven with those
around us express the relational senses of identity we have seen to be
characteristic of non-Western (and, increasingly, Western) societies.
Ubuntu Linux is developed and delivered by Canonical Ltd. (as one among its several FOSS projects). Canonical is explicitly rooted in: (a) Open
Source and the ten “core principles of open-source software” as
defined by the Open Source Initiative; and (b) the four freedoms of the
Free Software Foundation (www.ubuntu.com/community/mission).
The clear intersection between Open Source, Free Software, and the
ubuntu tradition is precisely the emphasis on inclusive rather than
exclusive property rights – and for the sake of benefiting one’s
neighbors and the larger (indeed, now worldwide) community.
Ubuntu Linux hence directly reflects the greater emphasis on
community well-being that characterizes indigenous (Southern)
African cultural values. At the same time, you may remember that, in
chapter 2, we saw this emphasis on the larger community as a
characteristic of other cultural traditions – especially Confucian and
Buddhist traditions – and its consequences for non-Western
conceptions of privacy. We are now starting to see how this emphasis
on community well-being has equally crucial consequences for our
notions of property, and thereby such common acts as copying and
distributing via digital media. We will return to culture and ethics in
the next section, as we consider Confucian thought and copyright.
FLOSS in practice
Beyond operating systems such as Linux, the FLOSS movements have
produced even more popular applications such as the Firefox web
browser and the Thunderbird email client, which run on Windows,
Macintosh, and Linux machines. The contemporary office suite
LibreOffice extensively duplicates the functionalities of Microsoft’s
Office software – and again runs on all three operating systems. Such
software is hence attractive not only to young people and university
students with limited finances; it is further argued to be critical to
overcoming the “digital divide” and to exploiting digital media for the
sake of development – while also preserving cultural diversity contra
the dominance of Western, especially US-based, corporations
(www.libreoffice.org/about-us/who-are-we). Strikingly, LibreOffice is
used by governmental agencies in Romance-language countries – as well
as in Taiwan (www.libreoffice.org/discover/who-uses-libreoffice).
But the ethical sensibilities and applications of FLOSS are not limited
to computer applications: they have generated other fruitful – perhaps
even essential – kinds of sharing online. The obvious example is
Wikipedia (www.wikipedia.org). Wikipedia invites more or less anyone not simply to read, but also to write for, and contribute other forms of media to, a given webpage. The motto “Imagine a world
in which every single human being can freely share in the sum of all
knowledge” (https://wikimediafoundation.org) is clearly in the spirit
of FLOSS – and has resulted since its founding in 2001 in a
remarkable resource: by 2015, the website hosted “more than 40
million articles in 301 different languages”
(https://en.wikipedia.org/wiki/Wikipedia).
In contrast to traditional copyright schemes, Wikipedia uses a hybrid
scheme – one that incorporates the GNU Free Documentation License
(version 1.3) developed by the Free Software Foundation (FSF), but
prioritizes the Creative Commons Attribution-ShareAlike 3.0
Unported License (CC-BY-SA)
(https://en.wikipedia.org/wiki/Wikipedia#Content_licensing).
While Wikipedia is clear that it is not to be used as a primary resource
for academic research (you have been warned!), it is now de facto one
of the first stops for research. In particular, because the materials here
may be updated and corrected much more quickly than printed
sources, Wikipedia articles may be especially useful (at least as a
starting point) for looking into current events, recent changes in a
field, and so forth.
In these ways, Wikipedia – along with other “products” of the FLOSS
movement – serves as a paradigmatic fulfilment of the philosophical
claims and assumptions underlying these alternative approaches to
copyright. In doing so, it provides strong justification for the ethical
frameworks and approaches at work in its licensing schemes. It
thereby serves as an important counterexample to proponents of more
traditional (either US or EU) copyright schemes, especially as such
proponents might argue that FLOSS approaches are somehow
utopian, excessively idealistic, impracticable, etc.
REFLECTION/DISCUSSION/WRITING QUESTIONS: INTELLECTUAL PROPERTY,
ETHICS, AND SOCIAL NETWORKING
We’ve now seen a range of possible approaches to how intellectual
property may be treated:
(i) US property-oriented copyright law (consequentialist);
(ii) EU copyright law, oriented toward authorial rights (deontological);
(iii) Open Source/FLOSS/“copyleft” schemes, including Creative
Commons and GNU General Public (GPL) and Free Documentation
(FDL) licenses.
1. Given your own country/location, which of these licensing schemes
seems to be prevalent in your experience?
2. In your view, what are the most important – but, especially,
ethically relevant – differences between these three approaches? Be
careful here, and, insofar as you are now familiar with one or more of
the ethical frameworks discussed in chapter 6 (beginning with
utilitarianism and deontology), try to discern how far a distinctive
ethical characteristic of a given licensing scheme may be seen to
depend upon a given ethical framework.
3. Return to your responses to one or more of the scenarios introduced
at the start of this chapter – e.g., stealing a CD from a music store,
making an illegal copy of new music for a friend, making your music
library available for others online through a p2p network, etc.
(i) Which of these three approaches to IP seems closest to your own
responses to such scenarios and the ethical justifications for those
responses that you have developed?
(ii) Which of these three approaches to IP most clearly contradicts
your own responses and justifications?
(iii) Develop a summary of the arguments, evidence, and/or other
reasons offered in support for the approaches you have identified in (i)
and (ii). Now: in light of the contrasts here between the arguments,
evidence, etc., can you discern additional arguments, evidence, etc.,
that might support one of these approaches more strongly than the
other?
4. Presuming you have an account on a social networking service such
as Facebook, Instagram, Twitter, Snapchat, and/or others:
(i) When you signed up for the account, did you review the “Terms of
Use” or equivalent legal/ethical agreements required of you as a user
of the site and its affiliated software? If so, why? If not, why not?
(ii) Review the “Terms of Use” for your networking site – looking
particularly for the important claims it makes regarding your
ownership of the materials that you post on the site. (For Facebook
users, the pertinent section of the “Terms of Use” is reproduced at the
beginning of this chapter.)
(iii) Are there claims here that
(a) surprise you, and/or
(b) upon reflection, you may not be comfortable agreeing to?
If so, identify these (both for your own reflection and, perhaps, for
class discussion and further writing).
(iv) Can you discern which of the three approaches to IP that we have
examined are presumed in these claims? If so, is part of your
discomfort with the claims made upon you here because you have a
strong ethical disagreement with the approach to IP presumed here?
That is, can you argue – most easily, from a different ethical approach
to IP – that the claims made upon you are somehow wrong?
(v) Social networking sites depend on acquiring as many user accounts
as possible in order to make money (primarily through advertising, the
sale of at least aggregated information about their users, etc.). In this
way, they are at least somewhat sensitive to the interests, needs, and
opinions of their users.
If you find that the “Terms of Use” of your favorite social networking
site conflict with your own ethics and underlying assumptions
regarding IP, it would be an interesting exercise to write to the site
owners (either individually and/or as a larger group) and explain your
disagreements and reasons for these. If nothing else, their response(s)
to your communications might provide additional material for
interesting ethical analysis!
2. Intellectual property and culture: Confucian ethics and
African thought
As the example of Ubuntu and the differences between US and EU
approaches to copyright suggest, our attitudes and approaches to
matters of intellectual property – specifically, how far, by whom, and
under what circumstances such materials may be justifiably shared –
are strongly shaped by culture. (Keep in mind, of course, the sense and
limitations of any generalizations we may try to make about culture:
see chapter 2, “Interlude,” pp. 49–53.)
As a further example: US copyright law is moderately clear with regard
to what counts as “fair use” for teaching and research purposes – at
least as far as printed materials are concerned.2 In particular, under
most circumstances, it is illegal for me to make, say, photocopies of an
entire book that I would then distribute to my students at the
beginning of the semester for their use during the course. On the other
hand, in the US I would be allowed to place original materials, such as
articles or book chapters, on reserve for my students in the library;
they are then free to check out these materials and make copies of
them – as part of their “fair use” of these materials as students.
By contrast, European copyright law makes no equivalent provisions
for “fair use.” And, on the third hand, in Thailand I received a now
highly cherished gift from some graduate students: a nicely
photocopied version of an important book in philosophy of
technology, complete with a carefully crafted cover, on which the
students inscribed their names. In US circumstances, this could only
be seen as a crass violation of copyright law: in the Thai context, this
copying was seen to be a mark of respect, both for the (famous and
well-known) author of the text and for me as the recipient of the gift.
In the latter case, the gift from the students reflected not simply
relatively limited economic resources – a (consequentialist) reason
often cited as a justification for making illegal copies of materials. In
addition, it reflected the influence of Confucian tradition: as Dan Burk
has summarized it, Confucian tradition emphasizes emulation of
revered classics – and, in this way, copying (as it was for medieval
monks in the West) is an activity that expresses highest respect for the
work of the author (Burk 2007, 101). By the same token, a master
philosopher or thinker is motivated primarily by the desire to benefit
others with his or her work – rather than, say, to profit personally
through the sale of that work – and so she or he would want to see that
work copied and distributed widely rather than restricted in its
distribution. As Peter Yu summarizes, “copying may be an important
living process for a Confucian Chinese to understand human
behaviour, to improve life through self-cultivation and to transmit
knowledge to the posterity” (2012, 4).
In this light, Confucian tradition and practice thus closely resemble
what we have already seen of ubuntu as a (Southern) African cultural
tradition. While, of course, distinct from one another in crucial ways,
they share the sense that individuals are relational beings, ones
centrally interdependent with the larger community for their very
existence and sense of meaning as human beings. Compared with
Western systems emphasizing individuals and the individual’s
exclusive property rights, both Confucian and ubuntu traditions
downplay the importance of the individual and individual interests,
stressing instead the importance of contributing to and maintaining
the harmony and well-being of the larger community. (We will explore
these matters more fully in chapter 6, but it is important to stress here
that this emphasis on the community does not mean – as it sometimes
seems to my Western students – the complete loss of “the individual.”
On the contrary, individual human beings retain significance and
integrity in these views, precisely as they are able to interact with
others in ways that foster community harmony and well-being.)
Hence, whether it is copying and giving an important text out of
respect and gratitude (my Thai students), or making available an OS
such as Ubuntu for free (in more than just the economic sense of being
without cost), in both cases the understanding of property is inclusive:
the right to access and use these materials belongs to the community,
not exclusively to the individual.
In sum, we have now seen culturally variable understandings of
property and the ethics of copying and distribution – initially within
Western cultures (US and European copyright schemes, along with
copyleft schemes affiliated with FLOSS), and now between Western
and non-Western cultures and traditions. In this light, it should now
be clear that the various software operating systems and applications
developed under FLOSS are popular in the developing world not
simply for economic reasons: that is, at least in terms of licensing
arrangements (though not necessarily in terms of technical and
administrative costs), FLOSS avoids the licensing fees charged by
corporations such as Microsoft. In addition, we have seen what we can
properly call the ethos or ethical sensibilities surrounding FLOSS: this
ethos includes an explicit emphasis on one’s contribution to a shared
work for the sake of a larger community. Moreover, this ethos
resonates closely with the emphasis on community well-being that we
have now seen to be characteristic of Confucian tradition and ubuntu,
as but two examples of non-Western philosophical and ethical
traditions.
And, presuming you read chapter 2 before this one, there is a larger
coherency that, I hope, is also becoming clear: just as major cultural
variations regarding our understanding of the individual vis-à-vis the
community shape our conceptions of privacy and expectations
regarding data privacy protection, so these major cultural variations
likewise shape our understandings of property and the ethics of
copying and sharing.
Specifically, recall the discussion there regarding changing
conceptions of selfhood in both “Western” and “Eastern” traditions.
Most briefly, just as strongly individual notions of selfhood correlate
with strongly individual notions of privacy, so it appears that these
notions further undergird and correlate with strongly individual
notions of property – including intellectual property – as primarily an
exclusive right held precisely by the individual as copyright holder.
And, just as more relational notions of selfhood correlate with more
inclusive or shared notions of privacy – such as group privacy or
familial privacy – so these notions, as manifest here especially in
Confucian and ubuntu traditions, further correlate with shared or
inclusive notions of property. In this light, the widespread and largely accepted practice – however illegal – of file-sharing, especially among younger folk, does not necessarily mean that there is some sort of rise of unethical behavior among the youth. And/or it may be that such
behavior further reflects these foundational shifts in our basic
understandings of selfhood and identity – that is, precisely toward
more relational selves for whom such sharing is directly coherent with
more inclusive notions of property grounded in the good of the
community (Ess 2010).
Recall here, as well, the middle ground between these two positions
staked out by notions of the self as a relational autonomy – that is, as
a (more individual) freedom conjoined with relationality as also
essential to our sense of self. It would seem that such a sense of self
coheres especially well with various copyleft schemes of property as
inclusive rather than exclusive. That is, these schemes do not, as we
have seen, abandon the notion of individual property rights altogether
– but rather transform exclusive conceptions into inclusive conceptions
that include shared rights of access by a larger community.
We can also note that these middle-ground conceptions are not
restricted to simply intellectual property or digital materials. Consider
the examples of allemannsretten – “all people’s rights” – in
Norwegian law (as well as in Sweden and elsewhere: Øian et al. 2018,
41). These laws allow “non-owners” the right to “walk through
uncultivated land at any time provided they exercise due care,” with
the same rights applying “to cultivated land in the winter months” –
and this without charge. Specifically, non-owners are allowed to pick
berries, mushrooms, and flowers; and to pitch a tent for up to two
nights, before needing to ask permission of the landowner
([Norwegian] Outdoor Recreation Act, 1957). To be sure, property
owners – farmers, cabin owners, etc. – retain their individual property
rights: in particular, they can charge for more specific activities on
their land, such as hunting. But the protection of public access in these
ways thereby shades these property rights into a more inclusive sort.
These laws thereby recall and to some degree reinstantiate premodern
Western notions of nature and defined lands as “commons,” as
property jointly held by and accessible to all members of a community
(Ess 2016). This is the sense of the “commons” as also invoked in the
“creative commons” licensing schemes.
Readers may recall that, in chapter 1, I warned against the dangers of
either/or thinking (pp. 10, 26–8). These examples of relational
autonomy and allemannsretten are helpful in providing us critical
middle grounds in what it might otherwise be tempting to treat as an
either/or – whether between individual and relational selves and/or,
correlatively, exclusive and inclusive conceptions of property. At the
same time, allemannsretten shows that such middle grounds are not
restricted to digital domains only. Rather, these examples of
commons-like property stand as important real-world examples that
further counter arguments against copyleft schemes as somehow
utopian, excessively idealistic, etc.
REFLECTION/DISCUSSION/WRITING QUESTIONS
1. COPYING: LAW, CULTURE – ETHICS?
Does the legality of copying music make a difference ethically? And
how do our cultural attitudes toward texts, authorship, and property
affect our ethical analyses of copying?
We have now seen a continuum of possible approaches to notions of
intellectual property and the ethics of copying and distributing such
properties. One way to schematize that continuum looks like this:
exclusive property / individual emphasis (US copyright law) ↔ hybrid, middle-ground schemes (EU authorial rights; FLOSS/copyleft, Creative Commons) ↔ inclusive property / community emphasis (Confucian and ubuntu traditions)
(Again, these generalizations about culture are starting points only.)
As you review your initial arguments and responses to the questions
concerning copying and distributing copyrighted materials:
(A) Can you now see one or more ways in which your views,
arguments, etc., rested on one or more of the assumptions underlying
these three diverse approaches to intellectual property? That is, how
far (if at all) do any of your views, arguments, etc., rest on:
assumptions about the relative importance of the individual vis-à-
vis the community
and/or
assumptions about the nature of property rights (exclusive or
inclusive)?
If they do, identify the specific assumption(s) at work in your initial
arguments and views.
(B) Does it appear that your relying on these assumptions is related to
your culture(s) of origin and experience? That is, do the assumptions
you’re making regarding either the individual/community relationship
and/or the inclusive/exclusive character of property correlate (or not) with these assumptions as they characterize the larger culture(s) of your origin and experience?
(C) Especially if there is a correlation between the assumption(s)
underlying your views and arguments and the culture(s) of your origin
and experience, what does that mean in terms of ethics? This is to say:
recognizing the role of culturally variable norms, beliefs, practices,
etc., in our ethical arguments characteristically leads to at least two
sorts of questions:
(i) Are our ethical norms, beliefs, practices, etc., ethically relative –
i.e., entirely reducible to the norms, beliefs, practices, etc., of a
particular culture? If so, then we could say, for example:
for persons in a Western culture whose basic assumptions tend to
support individual and exclusive notions of property and thus more
restrictive copyright laws – if those persons violate more restrictive
copyright laws (e.g., through illegally copying and distributing music),
they thereby violate the basic ethical norms of their culture and should
be condemned as wrong; but:
for persons in, say, a Confucian culture whose basic assumptions tend
to support more community-oriented, inclusive notions of property
and thus less restrictive copyright laws – if those persons violate the
more restrictive copyright laws of Western nations, they are thereby
simply following the moral norms and practices of their culture, and
should not be condemned as wrong.
Consider/discuss/write: Does this approach of ethical relativism to
the sorts of differences we have seen “make sense” to you as a way of
how we are to understand and respond to these deep differences
between cultures? If so, explain why. If not, why not?
(ii) If you do think there’s something mistaken about the above
scenario – and, thereby, about ethical relativism – then additional
questions arise:
(a) Do you want to shift to a posture of ethical absolutism
– claiming that the norms, beliefs, and practices of country/culture X
are the right ones: those countries/cultures/individuals who hold
different norms and beliefs are thereby wrong?
and/or
(b) Do you think it’s possible – as we saw in chapter 2 on privacy – to
develop an approach to matters of copying and distributing digital
media that works as an ethical pluralism?
As a reminder: ethical pluralism conjoins shared norms or values with
diverse interpretations/applications/understandings of those norms
and values – so as thereby to reflect precisely the often very different
basic assumptions and beliefs that define different cultural and ethical
traditions.
Consider/discuss/write: given what we’ve seen regarding the current
conflicts between US and European approaches to copyright law
(above, pp. 100–1), do these conflicts point toward an ethically
absolutist approach on the part of the different countries engaged in
these conflicts? And/or: in light of those conflicts, do you see any
possibility of an ethically pluralistic solution emerging?
(D) If you find that your beliefs, norms, and practices do not correlate
with those underlying the culture(s) of your origin and experience,
why might this be the case?
Are we – especially in terms of our ethical sensibilities – somehow
capable of discerning and establishing moral norms apart from,
perhaps even against, prevailing norms and assumptions of our
culture(s) of origin and experience? If so, how does that “work” in your
view? That is, how do we as human beings come to develop our own
ethical sensibilities? On what grounds?
2. COPYRIGHT: DIFFERENT ETHICS FOR DIFFERENT COUNTRIES, CULTURES?
A student from a developing country justified the practice of pirating
in that country – of illegally copying and selling imported music CDs –
under the conditions that the CDs in question were:
(a) the work of well-to-do (and primarily Western) artists, and
(b) distributed and sold in that country by equally well-to-do
multinational corporations.
The student’s justification was an interesting one:
(i) The widespread practice of pirating – of illegally copying and selling imported music CDs – effected an interesting change.
Originally, imported CDs cost around US$10.00. Pirated CDs were
being sold for US$1.00. But, after a certain period of time, the prices of
legal, imported CDs dropped to US$2.00 – thereby making them
much more affordable for that country’s inhabitants, and thus
allowing the multinational corporation and Western artist, at least arguably, to make more profit than they had before. This is to say: illegal copying
and sales of CDs in effect broke a market monopoly, so that the market
forces worked as they are supposed to – i.e., with free(r) competition
leading to lower prices.
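To see the arithmetic behind claim (i), consider a minimal sketch in Python. The prices come from the student’s account; the sales volumes are invented purely for illustration, since the account does not supply them:

# Hypothetical figures only: the student's account supplies the prices
# (US$10.00 before, US$2.00 after) but not sales volumes, so the unit
# counts below are invented for illustration.

def revenue(price_usd: float, units_sold: int) -> float:
    """Gross revenue from selling `units_sold` copies at `price_usd` each."""
    return price_usd * units_sold

# Before piracy broke the monopoly: few buyers can afford US$10.00.
before = revenue(10.00, units_sold=1_000)

# After: legal CDs at US$2.00 are affordable to many more buyers.
after = revenue(2.00, units_sold=8_000)

print(f"Revenue before: ${before:,.0f}; after: ${after:,.0f}")
# Revenue before: $10,000; after: $16,000 -- on these assumed volumes,
# freer competition and lower prices leave the seller better off.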
In addition, the student pointed out that, by contrast, many students
and others of limited means consciously choose to pay full price for a
CD produced by a local/regional/national music group. Again, the
argument is, at first blush, utilitarian:
(ii) By paying full price for CDs produced by local/regional/national
musicians, they thereby supported those who really needed it – and
thereby helped boost their own economy.
In both examples, the student’s arguments echo the arguments I hear
from many students in the developed world. Again, in the case of a
nationally or internationally known musician whose work is
distributed by wealthy and powerful corporations, the positive benefits
or consequences of illegal copying and downloading (in terms of
making the music more easily available for more people) outweigh the
possible negative costs (of a modest amount of lost profit to the
musicians and the companies). By contrast, many will make a
conscious effort to “buy local” – to pay full price for CDs produced and
distributed by local bands struggling to make a start.
Responses? In particular:
(i) Does it seem to you that, say, students and others in developing
countries can make a greater/stronger case for pirating and other
forms of illegal copying than students and others in developed
countries?
(ii) Assume that the developing country in this example is a country
marked by one of the more community-oriented traditions discussed
above – for example, ubuntu or Confucian thought. And assume that
the students in the developed world to whom I refer live in well-to-do countries and regions such as the United States and Scandinavia – that is,
countries and traditions shaped by Western conceptions of the
individual and primarily exclusive property rights.
In light of the important differences between the cultural and ethical
backgrounds, how do you respond to the claim that the students in the
developing country (shaped by ubuntu or Confucian tradition) have a
stronger justification for their illegal copying than Western students?
Or would you rather argue that everyone should follow the copyright
laws – no matter what their location and culture?
3. COPYRIGHT AND DEONTOLOGICAL ETHICS
Deontological ethics, as emphasizing, for example, duties to respect
and protect the rights of others – whatever the costs of doing so – can
be invoked in these debates as offering reasons for obeying the law
(e.g., RIAA – www.riaa.com/resources-learning/about-piracy). Even if
the consequences of doing so may be unpleasant – e.g., not having
access to music one would otherwise enjoy – doing so nonetheless
reflects an important duty to respect the property rights of others.
Such duties, however, crucially depend on establishing that the laws in
question are just laws – that is, grounded in one or more sets of values
and principles that are used to demonstrate that such laws are justified
as means to higher ends.
And so, Mahatma Gandhi and Dr. Martin Luther King, Jr. (not to
mention the signers of the US Declaration of Independence) famously
argued that, while we are morally obliged to follow just laws, we are
allowed, even morally obliged, to disobey unjust laws.
The trick, of course, is demonstrating that a given law is indeed unjust.
Some arguments I’ve heard in the debates over illegal copying sound
as though people are attempting to construct a deontological argument
along the following lines:
The laws established to “protect” the work of wealthy artists and
marketed by wealthy and powerful corporations are unjust.
They are unjust because the laws are not the result of a genuinely
democratic process, one in which the consent of those affected
plays the deciding role. Rather, they are laws that result from a
legislative process controlled by the powerful – those with the
money to do so. Those laws thus represent and protect the interests
of the wealthy and powerful – they do not represent or protect the
interests of the rest of us.
Given that these laws are unjust, I am allowed (perhaps even
obliged) to disobey them.
Perhaps with the help of your instructor and/or cohorts, review some
of the important deontological sources for arguments supporting
disobeying unjust laws (King [1963] 1964; cf. Brownlee 2017). And/or review broader critiques of, especially, the US copyright system and its dominance, from the perspectives of indigenous peoples and developing countries (e.g., Boateng 2011). Can you find/develop
deontological arguments along these lines that support disobeying
prevailing copyright laws as unjust laws? And, if so, how closely do
they parallel the sorts of arguments offered by Dr. Martin Luther King,
Jr., for example? In particular, how good an analogy is there between:
the situation and context supporting King’s arguments that
segregation laws are unjust – and thus must be disobeyed; and
the situation and context supporting the arguments you
find/develop showing that copyright laws are unjust and thus can
or must be disobeyed?
(It may be helpful to also review the discussion of analogy arguments
above, pp. 97–8).
4. COPYRIGHT AND VIRTUE ETHICS
Herman Tavani (2013) develops a framework for analyzing intellectual
property issues that rests squarely on Aristotle’s virtue ethics (see
chapter 6). On this view, information is taken to have as its ultimate
purpose both personal expression and utility; this further means that
information is best understood as a common good, something to be
shared – rather than treated as an exclusive property (as in the US and
EU, as we have seen). At the extreme, a focus on information –
whether as computer software or a popular song – as an exclusive
property, the right to which can be controlled by one person or
corporation, would lead to the end of “the public domain” – that is, a
kind of “information commons” that benefits the whole community.
(The analogy here is with the commons in preindustrial England, a
parcel of land as inclusive property that is accessible to all for the
benefit of all, in contrast to individual and exclusive private property.)
Arguably, much good – both individually and communally – has come
from the existence of such commons. Indeed, as Niels Ole Finnemann
(2005) has documented, part of the Scandinavian approach to
information technologies and their supporting infrastructures is based
on understanding these as common or public goods – ones that thus
require and deserve the material support of the state. Direct state
support of ICT infrastructure and development has thus contributed to
the Scandinavian countries enjoying the highest presence and use of
these technologies in their daily lives. (This approach obviously
directly resonates with allemannsretten as well.)
From the perspective of virtue ethics, then, we would pursue
excellence in our abilities to develop, manipulate, and distribute
information as a common good – not primarily because doing so will
benefit us personally in primarily economic terms; but rather, because,
in doing so, (a) we foster and improve upon important capacities and
abilities as human beings, including our ability to communicate with
one another and benefit one another using these new technologies;
and (b) doing so thereby contributes to greater community harmony
and benefit. (Cf. Peter Yu’s account of copying as “an important living
process for a Confucian Chinese to understand human behaviour, to
improve life through self-cultivation and to transmit knowledge to the
posterity” [2012, 4].)
Tavani emphasizes that this approach is not opposed to individual
economic gain. The ideal here would be to develop a system that could
conjoin these notions of virtue ethics and the common good with a
recognized need for “fair compensation” for the costs and risks
individuals and companies take in developing products and making
them available in the marketplace. Tavani sees the Creative Commons
initiative (discussed above) as one way of institutionalizing such a
virtue ethics approach to information (Tavani 2013, 252–60; cf. Ess
2016).
Responses? In particular:
(A) Are there important virtues or habits of excellence that might
come into play in either:
(i) practicing obeying, for example, copyright laws (as well as other
laws), at least as long as they are just laws?
(ii) practicing disobeying such laws?
(B) Are there important virtues or habits of excellence that might come
into play in either:
(i) practicing obeying, for example, copyright laws (as well as other
laws), even if they are unjust laws?
(ii) practicing disobeying such laws?
5. CULTURE – AGAIN
(Remember: the following generalizations are heuristic starting points
only. There will be plenty of counterexamples, nuances, and greater
complexities as we go along.)
In addition to culture correlating with basic assumptions regarding the
individual/community relationship and the nature of property rights
(inclusive/exclusive), we have seen that it may further correlate with
the basic ethical frameworks we have been using:
Roughly, if you have been acculturated in a Western/Northern
country such as the US and the UK, it may be that your arguments
largely emphasize utilitarian approaches.
If you have been acculturated in a Western/Northern country such
as the Germanic countries and Scandinavia, it may be that your
arguments more likely include deontological approaches.
If you have been acculturated in a non-Western country –
especially one shaped by the sorts of traditions we have explored so
far (ubuntu, Confucian thought, and Buddhist thought) that
emphasize the well-being of the community, you may have a
stronger likelihood of appreciating virtue ethics approaches – i.e.,
beginning with questions about what kinds of human beings we
need to become – and thus what sorts of habits and practices of
excellence we must pursue, for the sake of both our own
contentment and well-being (eudaimonia) and that of our larger
community; and/or you may have a stronger likelihood of
appreciating the importance of doing what will benefit the larger
community in any event, insofar as we as individuals are crucially
interdependent with the other members of our community. (Similar
comments may also hold for those acculturated in Scandinavian
countries, as marked by strong traditions of shared public goods,
as exemplified in allemannsretten and social democratic
approaches to public infrastructure, including ICTs and the
internet.)
What role – if any, so far as you can tell – does your own culture play
in shaping your attitudes, beliefs, and practices in these matters?
Stated differently: can you see whether or not your own arguments
have been reinforced in one or more ways by the larger cultural
tradition(s) that have shaped you? And/or do your own arguments
tend to run against the prevailing ethics of the larger cultural
traditions that have shaped you?
(After responding to these questions, you may want to revisit the
questions regarding our meta-ethical frameworks – ethical relativism,
absolutism, and pluralism – raised above in questions (1)(C)(i) and (1)(C)(ii), pp. 118–19.)
Notes
1 For non-geeks: the operating system, or OS, is the base-level
software required to make your computer “work” – including
reading and writing files from various media (CDs, DVDs, memory
sticks, hard drives) and through various communication channels
and networks (phone lines, Ethernet connection, wireless
networks), along with the many operations required to let you
interact with and use that information (e.g., keyboards and mice
and the computer screen). Application software, by contrast, is
software that runs, so to speak, on top of the OS: this commonly
includes applications for word processing, email, spreadsheets,
presentation, web-browsing, instant messaging, etc.
2 The US Copyright Act of 1976 was accompanied by the
development of “Guidelines for Classroom Copying in Not-for-
Profit Educational Institutions with Respect to Books and
Periodicals” (www.copyright.gov/circs/circ21). In the US, various universities are establishing guidelines and background
materials for guiding students and faculty in applying fair use
principles to digital materials (e.g.,
https://ogc.harvard.edu/pages/copyright-and-fair-use). But,
especially from an international perspective, far from any sort of
consensus emerging, discussion appears to be in flux, and policy-
making and legislation even more so (e.g., Hick and Schmücker
2016).
CHAPTER FOUR
Friendship, Death Online, Slow/Fair
Technology, and Democracy
Where the funeral used to be the primary ritual space of social
mourning expressions, now social media networks offer an expansion
of sociality (multiple social milieus), spatiality (multiple spaces) and
temporality (multiple timeframes). … mourning etiquette is both
challenging the online social scene as well as being redefined by it.
(Sabra 2017, 25)
[S]ocial media space is not a replacement for physical space in the
making of contemporary social movements. … Isolated from other
networks of communications and media, social media cannot make a
revolution.
(Lim 2018, 128)
Chapter overview
We explore four aspects of life online that offer remarkable new
possibilities in our personal and shared lives while also confronting us
with new ethical questions and challenges. We examine first how
friendship is both amplified and threatened by social networking sites
(SNSs) such as Facebook. A virtue ethics approach both raises serious
ethical questions and offers helpful suggestions for resolving those
questions.
We then take up the recent phenomena of “death online,” the
emerging practices of announcing, grieving, and memorializing the
death of those close – and those not so close – to us. The collapse of the divide between traditionally private rituals of grieving and the largely public venues of online communication evokes new ethical questions – as do the “digital legacies” we leave behind, from online
profiles to our mobile devices.
These recent phenomena highlight ways in which especially young
people are moving away from Facebook – and toward a “post-digital”
era indexed by greater emphasis on the importance of our offline
worlds, for example when grappling with our deepest friendships and with grief.
Resonant with these developments are growing interests in “slow
technology” and Fairtrade commitments as shaping the very design of
our technologies. We will explore these specifically by way of the
Fairphone as a case-study.
Lastly, we address our lives as citizens in – hopefully – democratic
societies. Early confidence in the democratizing powers of digital
media has been severely countered by the collapse of the Arab Springs
into the Arab Winters. “Fake news” and the role of social media in
fragmenting and polarizing democratic publics are additional ways of
exploiting online communication that foster the global rise of “digital
authoritarianism.” These darker developments are (somewhat)
countered by recent uses of deontological and virtue ethics
approaches.
Friendship online? Initial considerations
At the time of this writing, the SNS Facebook (FB) claims that over 2.7
billion people – about 37 percent of the planet’s population – use the
FB services of Facebook, Instagram, WhatsApp, or Messenger
(Facebook 2019, 1). Such staggering numbers are but one marker of
the explosive growth of SNSs over their nearly two decades of
existence. As Shannon Vallor notes, our use of these sites is “reshaping
how human beings initiate and/or maintain virtually every type of
ethically significant social bond or role,” beginning with friendship but
extending through “parent-to-child, co-worker-to-co-worker,
employer-to-employee, teacher-to-student, neighbor-to-neighbor,
seller-to-buyer, and doctor-to-patient” relationships – and this is
simply “a partial list” (2016a, 1).
On the one hand, the boons of connecting with one another through
such sites are undeniable. Especially in highly mobile societies such as
the United States, SNSs allow friends and family who have moved
apart to remain in touch in emotionally invaluable ways. Multiple
organizations – from student groups to religious communities – have
exploited the affordances of SNSs to bring together likeminded
members and attract potential new ones (e.g., Lomborg and Ess 2012).
Indeed, despite the recent scandals and concerns surrounding
Facebook, having a Facebook page for one’s business, political party,
the local neighborhood improvement group, major (and minor) civil
projects, etc., remains essential for communicating within such groups
and publicizing to larger communities. Academics are likewise
expected to polish their “social media presence” – e.g., a profile on
LinkedIn, Academia, and/or ResearchGate. These services are
certainly useful for making new connections and perhaps gaining the
attention of “head-hunters” tasked with recruiting people to specific
positions. They are also becoming increasingly essential venues for
exchanging both pre-publication and post-publication journal articles
and book chapters – and thereby gaining still greater attention for
one’s own work. “To be is to be seen” – on social media.
Nonetheless, younger people in particular have been abandoning
Facebook for several years – first of all, because their parents and
other relatives have also joined. But other SNSs, such as Snapchat and
Instagram, have flourished as communication channels that are more
temporary and more easily secluded from their parents’ and other
adult eyes. In this post-digital era, more and more people work to
reduce their online engagements and time spent before screens
(Syvertsen and Enli 2019). Nevertheless, it remains essential,
especially for young people, to remain connected to their peers via
SNSs (e.g., Lüders 2011).
At the same time, both individual and group privacies can be at risk in
such sites. As well, SNSs raise the larger problem of self-
commodification. That is, such sites give us relatively narrow
categories for self-presentation, beginning with a binary choice
regarding gender. More generally, the strong tendency is to give users
categories having to do with our preferences as consumers (“music,
movies, fandom”; Livingstone 2011a, 354). And this is just the
beginning of commodification: especially in an era of Big Data and
what Jodi Dean (2009) calls “communicative capitalism,” the data
collected about us – from browsing history to entertainment choices to
credit card use – are the primary commodities exchanged between
such sites and advertisers who seek to micro-target us as consumers.
Using increasingly sophisticated “persuasive technologies,” the design of SNSs (and other services) aims to maximize our time and “click-throughs” – for
the sake of more data, advertising, and revenue. Finally, as we saw in
the classic example of Amanda Todd (chapter 1), there are ongoing
cases in which “friendship” online can be used as a vehicle for
cyberbullying of various forms – including forms severe enough to
lead to suicide.
INITIAL REFLECTION/DISCUSSION/WRITING QUESTIONS
1. Develop a utilitarian cost–benefit analysis of your own use of SNSs,
whether Facebook, Instagram, Snapchat, Twitter … and/or more
professionally oriented sites such as LinkedIn, Academia,
ResearchGate … and/or more
locally/regionally/nationally/internationally oriented sites such as …
To do so:
(A) Develop an informal “media log” in which you document your own
uses of SNSs over some period of days – e.g., a week or a month. Try to
be as careful and precise as possible in your documentation. In
addition to noting, say, “I checked my SNS profile 20 times today,” list
carefully just what you did as you did so – e.g., looked at a friend’s
profile page, commented on a photograph or other comment, checked
out a “person of interest” in your class, etc. The idea is to provide as
rich and fine-grained a picture of your media use as possible with a
view toward responding to the second part of this exercise – namely:
as you do so, what are, for you, some of the most important benefits of
your using the site(s)?
(Yes, you can automate this process with the growing array of tools – often provided by the companies themselves, such as Apple’s “Screen Time” – ostensibly designed to help us keep track of and reduce our screen time. Such tools may serve as helpful checks on your manual logging – but the manual log will force you to be more
conscious about the details in ways that should prove helpful to this
exercise.)
(B) Have you (and/or your cohorts, friends, and/or family) ever had
any negative experience(s) in using SNSs? If so, describe these with
some care, being sure to explain why these experiences were negative.
That is, did they result in hurt feelings, feelings of betrayal, lack of
privacy, loss of trust, loss of “face” among your friends and family,
serious sorts of financial cost or fraud … ? (You may also want to
consider some of the effects discussed below regarding “death
online.”)
(C) Either individually and/or in a group, line up your positive
experiences (and their approximate “utils”1) in one column, vis-à-vis
negative experiences (and their approximate utils) in an adjacent
column. You can then develop a continuum of possible ethical
responses to the benefits and risks – for example, ranging from a
complete abstinence from SNSs (because the risks of possible harms
are too high) to a moderate use of SNSs (as guided by careful
consideration of how to avoid known risks) to a complete embrace of
SNSs (on the view that the possibilities of serious harm are very low
and are outweighed by a clear set of benefits).
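(For one way of tabulating such a tally in code, see the sketch at the end of this question.)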
In light of your experiences, your columns of risks and benefits, and
the continuum of possible responses you develop, which response(s)
to SNSs and their possible uses would your utilitarian calculus
recommend?
As always, the chief question for our purposes is, why? That is,
whatever your response to this question, what reasons, arguments,
grounds, feelings, intuitions, sensibilities, etc., support and provide
justification for your position?
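To make the tallying described in (C) concrete, here is a minimal sketch in Python; the experiences, their util values, and the cut-off points between responses are all invented assumptions, offered only to show the mechanics of the calculus:

# Purely illustrative: invented experiences, util values, and cut-offs
# for the cost-benefit tally described in question 1(C).

benefits = {
    "kept in touch with family abroad": 8,
    "student group organized an event via SNS": 5,
    "found a job lead through LinkedIn": 6,
}
costs = {
    "privacy exposed by a tagged photo": -7,
    "hurt feelings from a public comment": -4,
}

total_benefits = sum(benefits.values())
total_costs = sum(costs.values())
net = total_benefits + total_costs

print(f"Benefits: {total_benefits}, Costs: {total_costs}, Net: {net}")

# One (simplistic, assumed) mapping from net utility onto the continuum
# of responses developed in (C):
if net < 0:
    print("Calculus suggests: abstain from, or sharply restrict, SNS use")
elif net < 5:
    print("Calculus suggests: moderate, risk-aware use")
else:
    print("Calculus suggests: continued use, while monitoring known risks")

Of course, the philosophically hard work lies in assigning and justifying the util values themselves; the code only makes the arithmetic – and the final mapping onto the continuum – explicit.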
2. Deontologists would approach SNSs from the perspective of basic
rights – beginning with rights to privacy, but also rights, for example,
to the intellectual property (IP – as distinct from IP [Internet
Protocol] addresses) uploaded and created on a profile (e.g., a
photograph).
Review the Terms of Service (ToS) and privacy policies of the SNS you
use primarily. This will take a while: not only are they characteristically very long, but in many cases they are also intentionally written to be difficult to read and understand, in order to encourage our “clicking through” the consent box. What rights do these
documents indicate are in fact protected – and/or what rights seem to
be only moderately protected, if at all?
In light of this review – and referring to the continuum of possible
responses or uses of SNSs you developed above (1.C) – how would a
strict deontologist – that is, one insisting upon basic rights to privacy
and (perhaps) IP – respond to the SNS’s ToS and privacy policies?
That is, if the primary issue is to preserve these rights at all costs,
where would a strict deontologist likely stand on the continuum of
responses you have developed?
3. We have begun to see that a number of researchers and ethicists
have long raised further questions about how SNSs involve self-
commodification – turning aspects of our identity and selfhood into
commodities or saleable products in a marketplace. On a first level, the
focus is on how such sites require us to present ourselves in terms of
our consumer tastes, such as music preferences, etc. (Livingstone
2011a). On a second level, the data we provide – both in setting up a
profile and then the additional data generated by our further use of an
SNS – constitute the economic bread and butter of the sites’ owners:
this information, when aggregated with that of many, many others, is
sold to various corporations and businesses who seek to advertise their
goods and services more effectively. On both levels, the design of the
SNS foregrounds those aspects of our selfhood and identity that: (a)
can be appropriately captured in the categories of consumer
preferences; and thereby (b) prove highly valuable for marketing and
advertising purposes.
Up to a point (as may be apparent in your initial utilitarian analyses),
such self-commodification may be perfectly useful and benign. But
deontologists would further raise the question: is there a point in these
processes when our focus on self-commodification risks having us lose
sight of our primary ethical identity as moral autonomies – i.e., as holders of freedoms and rights who must not be reduced to
commodities simply for sale in a marketplace?
Again, in light of this ethical focus – where would a strict deontologist
likely stand on the continuum of possible responses you developed
(1.C)?
4. Shannon Vallor notes in her article on “Social Networking and
Ethics” (https://plato.stanford.edu/archives/win2016/entries/ethics-
social-networking) that the ethical implications of SNSs are not
“strictly interpersonal”: in addition, our engagement with SNSs
implicates us in a “complex web of interactions between social
networking service users and their online and offline communities,
social network developers, corporations, governments and other
institutions” (2016a, 1).
In slightly different terms, this means that our engagement with SNSs
inextricably ties us (including our ethical agency and moral choices) in
with an extensive “web of relationships” that extends across the whole
range of actors and agents (including artificial agents) knotted
together by these networks. This would further seem to mean that our
ethical choices and responsibilities are thereby “distributed” or
likewise shared across such networks. In fact, philosophers such as
Luciano Floridi (2006) and Judith Simon (2015) argue that we must
consider carefully the implications of such distributed and shared
responsibility – i.e., beyond more traditional emphases on our
individual responsibility – as an inevitable dimension of our lives as
enmeshed within such networks.
If you did not already take into account the distributed nature of our
ethical responsibilities in your first responses to the above questions,
take some time to reflect on that now. In particular:
Does the distributed nature of ethical responsibility change any of
your utilitarian calculations and/or decisions/judgments regarding
what utilitarians would ethically recommend in terms of the
continuum of possible engagements with SNSs (including no use at
all)?
Does the distributed nature of ethical responsibility change any of
your deontological analyses and/or decisions/judgments regarding
what deontologists would ethically recommend in terms of the
continuum of possible engagements with SNSs (including no use at
all)?
(We will explore these questions in more concrete detail by way of the
Fairphone case-study below.)
Finally:
(A) Are there significant differences between the utilitarian and
deontological responses or judgments regarding possible uses of SNSs
(including no use at all)?
For example, you may find that deontologists, as concerned with
privacy and IP rights, as well as insisting that human freedom must
not be eliminated through commodification processes, would weigh in
more on the side of moderate to no use of SNSs. Utilitarians, by
contrast, might well argue more in favor of moderate use to a full
embrace of SNSs.
(B) If there are differences, which responses come closer to your own
current use and ethical sensibilities? That is, do you find your uses and
ethical judgments agreeing more with the utilitarians or more with the
deontologists?
(C) Either way, can you provide arguments, evidence, and/or some
other form of warrant that would argue in favor of your taking up
either the utilitarian or the deontological approach here?
(D) Keep in mind that these preferences tend to be strongly shaped by
our national and cultural backgrounds – for instance, with
utilitarianism tending to be stronger in English-speaking countries,
while deontologies, for example, tend to be stronger especially in
Northern European countries. In this light, do your preferences for
either utilitarianism or deontology “line up” with your
national/cultural background?2
If so, do you have additional arguments, evidence, and/or some other
form of warrant that would suggest that your preferences are not
simply the result of your background and enculturation? If not, can
you point to specific experiences, arguments, etc., that have
encouraged you to take up an ethical framework perhaps somewhat at
odds with those prevailing in your country/culture of origin?
Friendship online: Additional considerations
As discussed more fully in chapter 6 (pp. 260–6), virtue ethics
provides a third framework for analyzing and resolving ethical issues.
For many reasons, virtue ethics has become increasingly significant in
the digital age. To begin with, as the above exercise may suggest,
neither utilitarianism nor deontology alone may always “work” to help
us resolve some of the ethical issues occasioned by digital technologies
and networked communications. For example, both utilitarianism and
deontology tend to emphasize the ethical responsibilities of human
beings understood as primarily individual moral agents. But, as we
have seen (esp. chapter 2, pp. 60–8), our sense of selfhood in
“Western” societies appears to be shifting from more individual
emphases toward more relational emphases – and thereby toward
more relational forms of shared or group privacy, as well as the
distributed responsibility explored above. In both its ancient and
contemporary forms, virtue ethics begins precisely with the view that
human beings are also such relational or social beings, not solely
individual ones. Hence, virtue ethics is especially well suited to serve
as an ethical framework in the (post-)digital age, insofar as digital
media and networked communications incline us in more sociable or
relational directions. So we will see in chapter 5, for example, that
virtue ethics approaches become increasingly useful in our efforts to
respond to some of the ethical challenges clustering about sexually
explicit materials (SEMs) online and violence in games, as well as
robots and sexbots. And in chapter 6, we will further see how virtue
ethics has become central to both European and international efforts
to define an “ethically aligned design” for Artificial and Intelligent
Systems – that is, by the IEEE (Institute of Electrical and Electronics
Engineers), the largest professional and standards-setting
organization in the world (https://ethicsinaction.ieee.org).
In fact, virtue ethics emerged early on as a primary approach to the
ethics of friendship online. This is perhaps not surprising: recall the
guiding questions of virtue ethics – namely, what capacities or habits
must I acquire, practice, and develop with excellence in order to enjoy
a life of contentment (or happiness – eudaimonia)? Again, for both
ancient and contemporary virtue ethicists, such a life is always a
relational or social one: hence, our sense of contentment or well-being
(eudaimonia) is inevitably interdependent upon our relationships with
others. Friendship is among the most important of such relationships:
it is difficult to imagine a good life of flourishing and contentment
without it, and so friendship is a primary focus of virtue ethics.
In these directions, Vallor (2009, 2011, 2016a, 2016b) has carefully
examined how far SNSs may foster and/or hinder the capacities and
habits (virtues) required for developing and sustaining deep
friendships. The question then becomes: how far do our engagements
with SNSs incline us to acquire and foster the virtues required for
friendship – and how far might the very designs of SNSs instead
discourage our acquiring and fostering these virtues?
Vallor focuses on the virtues of patience, perseverance, and empathy
as requirements for deep, long-term friendships – as well as long-term
intimate relationships, and, indeed, communication itself. We once
learned these virtues in non-digital settings – say, a visit to an elderly
relative when we were young. With no possibility of escape, we
gradually learned how to engage with such Others, beginning with
simple conversation. Such engagements require, and thus cultivate,
precisely the virtues of patience and perseverance. These capacities
are essential to sustaining most human projects, including those of
close relationships:
In communication, perseverance manifests the willingness to push
through conflict or misunderstanding to reconnect with one’s partner
on the other side of the breach. But to be effective in maintaining the
intimacy of the communication, such perseverance must be coupled
with patience, the habit of “riding out” moments of irritation,
boredom, or incomprehension rather than tuning out or abruptly
changing the subject in an attempt to force the conversation into a
more satisfactory state. Indeed, the richest joys of communication
often come from being patient enough to actually grasp what is being
said, to finally get the joke, or to hear a challenging truth.
(Vallor 2009, 165)
To be sure, acquiring such capacities and habits is not easy,
especially in the beginning when our existing motivations and
dispositions often pull us in the opposite direction. One therefore
requires, in addition to our existing motives, situational opportunities
that exert some pressure upon us to move in the virtuous direction,
and the social strains and burdens of face-to-face conversation have
historically, and across cultures, often been rich sources of such
pressure.
(Ibid.)
Whatever their many benefits and advantages may be (especially from
a utilitarian perspective), the general question is then: how far do our
engagements with one another online foster and/or hinder the
development of such core virtues as patience and perseverance?
Overall, Vallor’s first response is not heartening:
For today’s technologies provide us with an ever-widening horizon of
escape routes from any interaction that has lost its momentary appeal,
and are widely celebrated by users precisely for their capacity to
liberate us from the uncomfortable strains and burdens of
conventional communication. I can … click away from a friend’s blog,
without the price that must be paid for physically turning away from a
face-to-face conversation.
(Ibid., 166)
This is to say: as SNSs are built around online engagements that are
quick, short, convenient, and ephemeral (as Snapchat and Instagram
are specifically designed to be), they thereby train communicative
habits that do not immediately seem to require the sorts of
perseverance and patience characteristic of at least some of our offline
encounters, especially our most significant ones. In particular, online
communicative environments always offer the possibility of an
immediate escape, as mood, desire, and/or necessity may dictate. This
is not necessarily problematic ethically. The ethical concern, rather, is
with how far I am likely also to stick with the offline engagements that
require virtues such as patience and perseverance – and thereby
acquire and learn how to foster those virtues.
To get at this a last way, Vallor emphasizes that in offline venues:
The gaze of the morally significant other, which holds me respectfully
in place and solicits my ongoing patience, is a critical element in my
moral development; though I might for all that ignore it, it creates an
important situational gradient in the virtuous direction.
(Ibid.)
In online contexts, however, it is easy to escape such a gaze. Such
escape is not always a bad thing: sometimes it might well be beneficent
and fully justified, as when we focus on our mobile devices while
commuting, etc. Rather, Vallor is raising the larger question: what
sorts of habits and excellences are we likely to acquire as such online
environments become our predominant venues for communication
with one another?
Again, acquiring such virtues is difficult at the beginning – especially,
it would seem, for a young person both as a beginner in these virtues
and as someone whose communicative engagements increasingly take
place in online rather than offline environments. That is: if we are at
the beginning stages of learning to acquire such virtues – precisely
because it is challenging and difficult to do so – it is especially
tempting to quit as soon as possible. (By analogy: think about how
young people fight against the sorts of practice required to become
competent musicians or athletes, much less excellent ones, for
example.) If the vast majority of our communicative engagements with
one another take place primarily in online contexts, are we likely to
acquire and foster the virtues of perseverance and patience? And/or is
it more likely that, when we are forced in offline contexts to take up
the difficult practices of patience and perseverance, we will rather seek
to return as quickly as possible to the relative familiarity of the
comparatively less demanding online environments?
ADDITIONAL REFLECTION/DISCUSSION/WRITING QUESTIONS
1. Take the opening question of virtue ethics seriously: what habits or
capacities (as abilities that must be first learned and then practiced –
as a musician must practice her scales, as an athlete must practice the
moves of the game, etc.) seem necessary to you and your cohorts for a
life of contentment and well-being – both for yourself and for the
communities in which you are inevitably interwoven? (Keep in mind
here that such contentment is not solely a matter of “subjective”
satisfaction, but further includes a more “objective” sense of having
the skills – as acquired and practiced virtues – that allow us to act well
and flourish in our larger communities and environments.)
2. In particular, how far do you dis/agree with Vallor’s first claim that
the virtues of patience and perseverance are required for developing
deep and long-term friendships? If you agree, why? If not, why not?
3. Return to your media log developed for question 1.A in the opening
set of reflection questions. Choose the online venues or environments
that your log indicates you use most often.
(A) Carefully reflect upon and consider your usages of these
environments from a virtue ethics perspective: what habits or
capabilities do these environments incline us to practice most often?
(B) Given your responses to 3.A, how far do the habits or capabilities
most practiced in the online environments you use the most overlap
with and/or differ from:
(i) your own list of virtues; and
(ii) the virtues identified as central by Vallor, namely perseverance,
patience, and empathy?
(C) Recall the continuum of possible responses to SNSs developed
above, ranging from no use to enthusiastic embrace. Given your
responses to 3.B, as a virtue ethicist, where on this continuum would
you argue you should stand? That is, which point on the continuum
seems most likely to help you acquire and foster the virtues you have
identified as necessary for a life of contentment or well-being
(eudaimonia)? (Again, keep in mind here that such flourishing is
dependent on virtues as skills and abilities needed for acting well in
community with others and our larger environments.)
As always, what counts here are your arguments and evidence.
4. Now that you’ve developed a virtue ethics response to SNSs and
their possible uses, how do your judgments and conclusions compare
with those developed above using utilitarian and deontological
frameworks?
5. Especially if virtue ethics lands you in a different place on your
continuum than utilitarianism and/or deontology:
(A) Which of these positions – utilitarianism, deontology, and/or
virtue ethics – lands you closest to the point on the continuum
that most closely coheres with your current actual practices and
usages?
(B) Given your response to 5.A – i.e., are your current practices and
usages best recommended from a utilitarian, deontological, and/or
virtue ethics standpoint – is the resulting ethical standpoint consistent
with the framework(s) you have found yourself most closely allied with
above and/or in other exercises in this book?
(C) Especially if you find yourself moving between ethical frameworks,
rest assured, first of all, that this is perfectly normal. It may well be
that each framework “works” better than another in the face of a
particular ethical context or challenge: indeed, part of learning about
ethics is just the (hard) work of learning how to judge which
framework(s) are best used when and where.
That said, does it appear that there might be some basic
inconsistencies or incoherencies in how you approach these ethical
issues? For example, is your choice of one framework in one context
warranted or justified in a way you can articulate and defend – or is it
possibly more the result of, say, your national/cultural background or
other factors you’ve previously not considered?
In all events – do you start to see ways of developing a more coherent
use of these frameworks and approaches? And, if so: would doing so
result in any changes in your actual practices and usages of SNSs?
ADDITIONAL RESOURCES: EVIDENCE, DESIGNER REGRETS, AND MOVES
TOWARD THE POST-DIGITAL
In the earlier years of social media, debates about their ethical
dimensions and concerns took place in the absence of reasonably
reliable empirical evidence regarding their actual impacts and
consequences. While such research will remain limited and qualified
in complex ways, nonetheless, more reasonably solid findings have
begun to emerge in the last few years. One of these –
Hunt Allcott, Luca Braghieri, Sarah Eichmeyer, and Matthew
Gentzkow (2019) The Welfare Effects of Social Media,
web.stanford.edu/~gentzkow/facebook
describes the costs and benefits for US-based Facebook users who
withdrew from FB for a period of four weeks in the fall of 2018. Some
of the important highlights are summarized and discussed in:
Benedict Carey (2019) This Is Your Brain Off Facebook, New York
Times, January 30, www.nytimes.com/2019/01/30/health/facebook-
psychology-health.html.
This evidence is consonant with a larger wave of growing criticism of
these technologies and the companies behind them – specifically by
(former) employees and designers who have come forth with dramatic
accounts of their regrets for having been involved in their design and
related processes. So Justin Rosenstein, the coder who developed the
“like” button in the first place, has emerged as a prominent critic of
“social media and other addictive technologies” (Lewis 2017).
Rosenstein is by no means alone:
Paul Lewis (2017) “Our Minds Can Be Hijacked”: The Tech Insiders
Who Fear a Smartphone Dystopia, The Guardian (October 6),
www.theguardian.com/technology/2017/oct/05/smartphone-
addiction-silicon-valley-dystopia.
On the other side, burgeoning research on “digital detox” – increasing
efforts to reduce and/or disconnect from our screens – documents
numerous strategies and approaches toward better balancing our
online and offline lives, e.g.:
Trine Syvertsen and Gunn Enli (2019) Digital Detox: Media Resistance
and the Promise of Authenticity, Convergence: The International
Journal of Research into New Media Technologies: 1–15. DOI:
10.1177/135485651984732
Review these studies and articles, and then return to your responses to
the questions raised above. Do these resources offer evidence and/or
other considerations that help refine your responses?
Friendship – and death – online
For most of us, at least (i.e., with the exception of some strands of
transhumanism), the rise of digital technologies and media did not
somehow eradicate our mortality. But the development of these
technologies has long been (and perhaps still is) accompanied by a
thematic interest in
“digital immortality.” It turns out that there are deep historical
backgrounds to this interest: these are in fact distinctively Western as
they rest on specific (Western) Christian theological assumptions of a
sharp dualism between an immortal soul and mortal body. This
dualism was secularized (as a disembodied reason vs. an irrational
body) and then “baked into” the underlying assumptions and aims of
modern technologies – including our foundational imaginings and
discourse surrounding a “bodiless cyberspace” (as in William Gibson’s
novel Neuromancer, 1984). Throughout the 1990s, especially US-
based discourse and usages of a primarily US-based internet reflected
these dualisms and affiliated dreams of “digital immortality” (Ess
2011). Especially transhumanism exemplifies the ongoing influence of
these ancient assumptions and hopes.
This background is useful, first, as it highlights the historical and
thereby cultural origins of these views: this should make it easier for
students, instructors, and readers from “other” cultural backgrounds
to recognize and reflect upon likely differences as well as
commonalities with their own assumptions and views. Second, it helps
explain the relatively late emergence of “Death Online Research”
(DOR). Beginning around 2012, DOR explores
emerging practices of “digitally mediated grieving and memorialising,”
digital “afterlife,” and so on (DORS4, 2018). Very simply, just as more
or less every other aspect of our lives is now inextricably interwoven
with our digital media practices, including our uses of social media, so
death in all its dimensions and impacts is increasingly expressed
through and in these technologies.
As usual, much of this is beneficent in multiple ways. SNSs and digital
communication channels such as Messenger, WhatsApp, etc., allow
often far-flung family members and friends to learn about the death of
a close friend or relative, share their remembrances and grief,
establish online memorials, etc. In particular, Ylva Hård af Segerstad
and Dick Kasperowski (2015) have documented the experiences of
bereaved parents in a closed Facebook group in Sweden. In highly
secular–rational Scandinavia, death is largely a taboo topic – the
death of one’s child all the more so. At the same time, there is likely
nothing so devastating to a parent as the loss of one’s child – and so
the need to somehow connect up with others who can understand and
be supportive is all the more imperative. No one understands better
than other bereaved parents, and so the possibility of making
connections with this cohort can be life-saving. Consider this SMS
from a bereaved mother:
I’m in a fitting room writing to you. Feel I have to in order not to break
down. It is sooo difficult! Why do we have to go through this. Miss my
child so I don’t think I’ll be able to make it. Tomorrow is two long
months since I hugged my beloved X and I can never again do that.
How is it possible? How can a beloved person who was with me every
day and who was so warm and wonderful be gone? I think I’m going
crazy.
(Hård af Segerstad and Kasperowski 2015, 25f.)
As Hård af Segerstad and Kasperowski further document, this
bereaved mother is now able to take up contact with parents like her –
parents who know first-hand the thousand despairs and inconsolable
grief of losing one’s own child. Here she finds understanding,
acceptance, consolation, and massive help toward eventual recovery of
her ability to reengage with her life and world.
But, as with any novel technology and application, bringing death,
grief, memorialization, and so on into the online world can also be
profoundly problematic. An increasingly common problem is learning
of the death of a close friend or relative not from a parent or relative,
not from a professional counsellor or authority trained in how to break
such devastating news – but from an otherwise well-meaning posting
on the now deceased person’s Facebook page. That is, someone else –
often quite removed from the primary circle of family and friends –
learns of Person X’s death and goes straight to his or her profile to post
a note of sympathy and condolence. This starts a cycle of others
adding their own notes – sometimes well before those within the close
circle of friends and family are informed through more regular and
comforting channels. Moreover, for those closest to the deceased,
these perhaps well-intentioned condolences and expressions of
support can begin to ring hollow. In particular, “vicarious grief”
(Hovde 2016, 101) or “grief hypejacking” (Abidin 2018, 169f.) is now a
well-documented phenomenon in which the poster appears more interested
in calling attention to himself or herself among the crowd of mourners
than in consoling the bereaved.
Interestingly – and another index of our moving into a post-digital era
– in some cases, these online experiences can lead to a rejection of
social media altogether. In her study on “Grief 2.0,” Astrid Linnea
Hovde documents how two of her young informants (“Sophie” and
“Elisabeth”3) found that working through their grief required the real-
world presence of close friends and family. “Sophie,” who lost her
sister, commented:
It is so different to talk to them, cry in front of them and just lay there
with my head on their lap, than to look at her memorial page to see if
someone has written anything there that day.
(Hovde 2016, 54f.)
The contrast between this embodied co-presence and the online world
led “Sophie” to conclude:
I was hiding on Facebook before, when I posted things there, I didn’t
have to talk about how I was doing. It was comfortable being on
Facebook, I didn’t have to face people’s reaction when I talked about
[my sister’s death], that was really hard for me to face. On Facebook
you don’t see the people you’re talking to, so it gets less scary.
(Ibid.)
But, despite the comparative ease of online communication vs. real-
world, embodied communication, the latter was so necessary to her
grieving that “Sophie” made a rather remarkable decision:
But I worked really hard to quit relying on Facebook, and to start
living in the real world again.
(Ibid., 55; cf. Hovde 2016, 51–9)
The growing interest in “digital detox” (Syvertsen and Enli 2019) and
related forms of increasing skepticism toward our online engagements
suggest that “Sophie” and “Elisabeth” are not alone in their responses.
At the same time, however, we have yet to establish a clear
“netiquette” for online mourning – one that would help avoid such
disastrous gaffes as, first of all, posting a condolence note on a
friend’s profile before close friends and family have been informed of
the death (cf. Sabra 2017).
REFLECTION/DISCUSSION/WRITING EXERCISE: A NETIQUETTE FOR ONLINE
MOURNING?
Given a first exposure to these phenomena and experiences: what kind
of “netiquette” or guidelines for ethically appropriate behavior
regarding death online can you develop?
Minimally, such a netiquette would suggest:
when to post what information about a death?
It would further be sensitive to:
the kind of death involved – e.g., accident, suicide, victim of terror,
disease … – as these different circumstances entail different
possibilities of response from appropriate family members and
authorities, diverse sensitivities (e.g., we may regard suicide as a far
more private and personal form of death than death by accident or
terror), and so on.
Our ethical and social responses will also vary depending on what
venue(s) our online postings and communication take place in,
beginning with:
more open (e.g., a relatively open SNS profile) ← → more closed (e.g.,
a specific closed group)
A netiquette would further provide us guidelines as to what kinds of
responses are most appropriate and when, e.g.:
Heartfelt expressions of solidarity from close friends and family vis-à-
vis more polite expressions of condolences by more distant friends,
etc.
Such a netiquette would also seem to need to differentiate
with whom we communicate and share – e.g., someone who has experienced
the same loss we have, such as the bereaved mother in our example
above.
Last, but certainly not least: all of this is, of course, highly culturally
variable, as our beliefs, attitudes, practices, etc., concerning death vary
widely from culture to culture. A complete netiquette would not only
provide guidelines for the above sorts of factors within a given culture:
more ambitiously, it would also offer guidance for what is appropriate
across the diverse cultures interwoven on any given SNS profile and
communication medium.
ADDITIONAL RESOURCES: AN ETHICS FOR DEATH IN DIGITAL MEDIA?
These thorny questions regarding our sensitive use of SNSs vis-à-vis
death are just the beginning of our ethical challenges. Both online and
offline, the deceased leave behind an extensive digital record – their
emails, text messages, SNS profiles, postings, photographs, etc. Do we
delete or somehow preserve their SNS profile? What are we to do with
a loved one’s tablet and/or phone and/or computer and all of their
records – some of which, almost certainly, she or he would not want us
or anyone else to have access to?
The following are useful resources for beginning to reflect on these
additional ethical issues:
Kathleen M. Cumiskey and Larissa Hjorth (2017) Haunting Hands:
Mobile Media Practices and Loss. Oxford University Press.
Reviews the multiple dimensions of culturally specific notions of grief;
the distinctive features of mobile media that disrupt our earlier
notions of “public” and “private,” as distinctive sites and venues for
grief; and then provides a series of in-depth explorations of “culturally
specific, affect-laden rituals in and around mobile media practices,”
followed by “the ways in which the mobile device can become
haunted” (2017, 5, 19).
Zizi Papacharissi (ed.) (2018) A Networked Self and Birth, Life, Death.
London: Routledge. Several of the chapters collected here directly
address the diverse intersections between death and digital media:
Amanda Lagerkvist, The Ethos of Quantification in Bereavement
Online (11–34);
Tama Leaver, Co-Creating Birth and Death on Social Media (35–49);
Catherine Steele and Jessica Lu, Defying Death: Black Joy as
Resistance Online (143–59);
Crystal Abidin, Young People and Digital Grief Etiquette (160–74).
The latter two are especially useful as they extend the scope of
research across racial and cultural boundaries (Abidin’s material is
drawn from Singapore).
Slow technology and the Fairphone
Another indicator that we may be in a post-digital era is the increasing
interest in “slow technology” and “slow design” approaches (Weiser
and Brown 1996). Lars Hallnäs and Johan Redström define slow
technology as “a design agenda for technology aimed at reflection and
moments of mental rest rather than efficiency in performance” (2001,
201). These approaches have gradually gained ground in recent years:
Norberto Patrignani and Diane Whitehouse (2018) present slow tech
as an approach that
offers people more time for reflection and for the processes needed to
design and use ICT that takes into account human well-being (good
ICT), the whole life cycle of the materials, energy, and products used
to create, manufacture, power, and dispose of ICT (clean ICT), and the
working conditions of workers throughout the entire ICT supply chain
(fair ICT).
(2018, 1)
Their focus on “human well-being” points precisely toward virtue
ethics’ defining aims of flourishing and good lives – the key
commitments of virtuous design (Spiekermann 2016). Patrignani and
Whitehouse further argue for affiliated ethical commitments to design
that takes on board the imperatives of environmental sustainability
and matters of fairness and justice. Nor can these be dismissed as
somehow utopian or merely theoretical. Rather, Patrignani and
Whitehouse foreground real-world design projects – including by the
Italian companies Olivetti and Loccioni, as well as the Dutch-based
Fairphone – that exemplify slow-tech design approaches. As well,
Patrignani and Whitehouse argue that the requirements for
“responsible research and innovation” now built into the European
Commission’s major funding project, its Horizon 2020 program,
likewise require researchers and their collaborators to take on board
some of the ethical commitments involved here (so also: Stahl,
Timmermans, and Mittelstadt 2016).
Case-study: Are you ethically obliged to
purchase a Fairphone?
The Fairphone is advertised as follows: “We’ve created the world’s first
ethical, modular smartphone. You shouldn’t have to choose between a
great phone and a fair supply chain” (www.fairphone.com/en). “Fair”
here is understood as “fair trade”: the phone began in the (more
deontological) Netherlands as part of a campaign highlighting the role
of “conflict minerals” (including tin, tantalum, and gold). These
minerals are essential to the production of smartphones (as well as
virtually all other electronics, including our computers, tablets, digital
cameras, etc.). Western supply chains have sourced these minerals
from the Democratic Republic of Congo – and so they are
bloodstained by that country’s civil war. The campaign’s originators,
Peter van der Mark and Bas van Abel, led a company that produced
Fairphone 1 in 2013, and Fairphone 2 in 2015 (Akemu, Whiteman, and
Kennedy, 2016, 1). Maja van der Velden characterizes “fair” here as
including “a people-first approach, fair and conflict-free resources, the
use of recycled materials, e-waste solutions across the supply chain,
fair technical and design specifications, and transparent pricing”
(2014, 6). This includes specific attention to “Good Working
Conditions,” with a view toward improving “worker satisfaction and
representation,” and helping to move “the electronics industry towards
zero exposure of workers to toxic chemicals in the manufacturing
process” (www.fairphone.com/en/our-goals/social-work-values). In
these directions, the Fairphone was the first to incorporate Fairtrade
gold in its supply chain. And Fairphone has received Greenpeace’s
highest grade for green electronics
(www.greenpeace.org/usa/reports/greener-electronics-2017). In these
ways, the Fairphone is a primary example of slow technology design
(Patrignani and Whitehouse 2018, 125f.).
Fairphones 2 and 3 are built out of modules. Should a particular
component break – such as the screen or microphone – replacement
modules can be ordered and installed quite easily. Ditto for modules
that might be improved in subsequent development, such as the
camera. This modular design thus extends the life of the phone by
encouraging “the reuse and repair of our phones, researching
electronics recycling options and reducing electronic waste worldwide”
(www.fairphone.com/en/our-goals/recycling).
The Fairphone 3 is a mid-range smartphone, priced at €450. A
comparable phone from a larger manufacturer costs less: as with other
Fairtrade products, the higher price reflects the company’s efforts to
provide better working conditions and wages for those who assemble
the device, and, specifically, to avoid conflict minerals, as well as
resources mined by child slaves.
1. After reviewing these and other aspects of the phone’s design and
Fairtrade aims, consider the question: are you ethically obliged to buy
such a phone instead of a phone from one of the larger, more well-
known brands?
You can begin to think about this in the ethical frameworks we have
explored most fully – utilitarianism, deontology, and virtue ethics.
Develop your initial reflections on the question, using each of these
frameworks as a starting point.
2. Before drawing any further conclusions, you can add the following
points to your ethical frameworks.
(A) We have seen that the emergence of more relational selves in
Western countries is affiliated with emerging notions of “distributed
morality” (Floridi 2013) and “distributed responsibility” (Simon 2015).
That is – in contrast with more individualistic emphases in deontology
and utilitarianism – given that we are inextricably interwoven with our
larger communities, precisely by way of the digital technologies that
infuse and define our lives, it seems that our ethical choices and
responsibilities are “stretched out” over these networks. (The same
holds true for notions of relational autonomy.) Floridi’s
examples of distributed responsibility include “the Shopping
Samaritan,” that is, consumers who choose “Red” products from major
brands which in turn donate to a fund dedicated to treating and
eradicating AIDS/HIV. At the time of this writing, Red has raised over
US$600 million, thus helping “more than 140 million people with
prevention, treatment, counselling, HIV testing and care services”
(www.red.org/how-red-works).
Both Red products and the Fairphone, along with Fairtrade products
more generally, thus foreground how our purchasing choices have
consequences for others across the globe – and offer ways to help
improve the lives of others.
Question: in your initial responses to question 1, above – what
assumptions did you make about the scope or reach of your ethical
responsibilities? That is, did you presume a more individual and
restricted sense of responsibility – and/or a more relational and
distributed sense of responsibility?
(B) Floridi’s “shopping Samaritan” points to a central distinction in
ethics – namely, between primary but minimal levels of obligations
and duties vis-à-vis what philosophers like to call “supererogatory”
obligations (Heyd 2016). Judith Jarvis Thomson (1971) specifically
discussed a “Good Samaritan Ethics” to mark out those ethical choices
that go above and beyond our usual expectations and requirements –
as exemplified in the story of the Good Samaritan in the Christian
Scriptures (Luke 10:30–7). But while such choices may be exemplary,
even heroic, we recognize at the same time that they are admirable
precisely because they go beyond our everyday expectations and
norms.
In light of this distinction, we can thus revise question 1: are you
morally obliged to buy a Fairphone – and/or is buying a Fairphone
instead a morally exemplary act, one that we can endorse for those
who can afford it, but one that we cannot argue is ethically obligatory
for all of us, e.g., students and others on limited budgets?
Again, your responses here may vary somewhat, depending on the
initial ethical frameworks you take up.
(For a more comprehensive ethical analysis of the Fairphone, see Ess
in press.)
Digital media and democratization: First
considerations
In the early 1990s, the emerging internet and then the World Wide
Web were frequently accompanied by fervent hopes and claims that
these technologies would – perhaps inevitably – lead to greater
democracy around the globe. Throughout the early 2000s, there were
heartening examples supporting this optimism (e.g., Wheeler 2006).
The most dramatic examples were the Arab Springs of 2011 – the
pro-democracy movements that began in Tunisia and then spread to
Algeria, Morocco, Yemen, Bahrain, Egypt, and Syria. These
movements were initially heralded as “Facebook revolutions” or
“Twitter revolutions” precisely because of their central reliance on
social media (Howard et al. 2011, 3). But, as with the failed 2009
protests against Iranian President Mahmoud Ahmadinejad (protests
dramatically fueled by a video clip showing the young philosophy
student Neda Agha Soltan being shot by government security forces, a
clip that went viral on YouTube and Twitter with the hashtag #neda),
the Arab Springs soon collapsed into the Arab Winters. That is, with
the exception of Tunisia, the authoritarian regimes in these
countries remained intact – if not all the more repressive and in
control of their populations, thanks especially to the “total
surveillance” made possible by these same social media and related
technologies. More broadly, the 2018 “Freedom on the Net” report
starkly concludes:
Disinformation and propaganda disseminated online have poisoned
the public sphere. The unbridled collection of personal data has
broken down traditional notions of privacy. And a cohort of countries
is moving toward digital authoritarianism by embracing the Chinese
model of extensive censorship and automated surveillance systems. As
a result of these trends, global internet freedom declined for the eighth
consecutive year in 2018.
(Shahbaz 2018, 1)
This is not to say that all hope for these technologies as technologies of
democratization and liberation is lost: on the contrary, we will see that
there remain bright spots and developments that at least partially
counter these darker pictures. But, among all the questions and issues
they evoke, central for us, of course, are their ethical dimensions.
Broadly we may (must) ask: What are the ethical values, frameworks,
duties, and/or virtues of those of us who wish to become/remain
citizens in a (post-)digital democracy?
To get to these questions, we first need to ask: What do we mean by
“democracy?” We will explore these matters, especially, as diverse
conceptions of “democracy” have interacted with the rise of
communication technologies – defined here as beginning with orality
(as in Marshall McLuhan and Medium Theory), then electric media
(specifically, television), and then digital media per se.
Democracy, technology, cultures
Not surprisingly – i.e., given their origins and primary spheres of
development – the internet and internet-facilitated communication
are deeply rooted in the cultural backgrounds and assumptions of the
United States. And, contra the assumptions of “technological
instrumentalism” – roughly, the idea that technologies are “just tools,”
somehow value-free or neutral – what has become very clear over the
past 20 years or so is rather that our technologies embed and reinforce
our fundamental cultural values and norms, whether we recognize
these or not. The same holds specifically for this early optimism – if
not utopianism. As James Carey (1989) has noted, the Federalist
Papers (1787, 1788), in debating the proper role of the hoped-for
United States federal government, argue that one of the
responsibilities of such a government is to subsidize canals and roads
– precisely for the sake of democratic polity. This is because a core
process of democracy is dialogue and debate among citizens. But,
beginning with Plato, there have been arguments that democracy
would thus be “naturally” limited. Very simply, in preliterate days –
when orality was our primary communication technology – such
debate and dialogue would require face-to-face presence. Such
presence in turn is limited by available transportation, either on foot
or by animal. To make democratic dialogue and debate possible within
a new nation spanning the original 13 colonies would thus require
more advanced transportation technologies – precisely the roads and
canals under discussion – in order to overcome the otherwise quite
modest “natural” limits of democracy. Carey argues that this
understanding of communication technologies as undergirding
democratic values and aspirations became a definitive strand of US
culture. Hence, it was not surprising to see the rationales for globally
expanding the internet – as almost exclusively “born and raised” in the
USA – to include at the forefront this centuries-old US optimism that
communication technologies more or less inevitably improve the
processes of democracy.
A first problem with these early claims, however, is: What do we mean
by “democracy?” For many early proponents of electronic or online
democracy, the presumption was that the internet would facilitate
some form of direct or plebiscitary democracy – for example, through
instantaneous polling or votes. Such plebiscite arrangements,
however, have long been criticized for their capacity quickly to turn
anti-democratic as they are prey to the problem of “the tyranny of the
majority.” In contemporary terms, the wisdom of the crowd can
quickly turn into the madness of the mob. Moreover, as Jean Bethke
Elshtain warned vis-à-vis television voting experiments in the 1980s,
such voting lets us confuse “simply performing as the responding ‘end’
of a prefabricated system of external stimuli” with democratic
participation (1982, 108, in Rheingold 1993, 287). Especially as new
media and digital media are increasingly driven by the frameworks
and assumptions of consumption and entertainment, political
theorists Marcel Henaff and Tracy Strong presciently observed early
on that “the main public space of our time is that of consumption;
hence the political is subjected to its logic and has come to be assessed
by the criterion of the image” (2001, 26). But consumer “choice” is
relentlessly assaulted by ubiquitous advertising appealing to our
individual tastes, desires for convenience, and so forth – all the more
so as the massive amounts of data collected about our browsing,
choices on streaming services, social media use, credit card use, et
cetera ad nauseam, allow advertisers to “micro-target” their ads to
each of us individually. However pleasurable and rewarding our lives
as consumers may be, such choices are starkly different from those
assumed and required by democratic processes and governance:
minimally, such citizen choices are to be shaped by reasoned debate
and with at least some view toward the larger good, not simply one’s
own.
In such consumer-oriented models of decision-making, then:
Democracy thus loses its rationality. Images displace arguments.
Debates are turned into games. The show never stops. All games
become interchangeable; the political stage tends to be no more than
one among others.
(Henaff and Strong 2001, 26f.)
In the worst case, as Elshtain warns, “plebiscitism is compatible with
authoritarian politics carried out under the guise of, or with the
connivance of, majority views. That opinion can be registered by easily
manipulated, ritualistic plebiscites, so there is no need for debate on
substantive questions” (Elshtain 1982, 108, in Rheingold 1993, 287).
These warnings now seem strikingly prescient, especially as
exemplified by the US presidential debates leading up to the election
of Donald Trump – a reality TV star who is clearly adept at
manipulating media attention to his advantage – in 2016. By the same
token, the risks to democratic debate and deliberation posed by such
electronic plebiscitism are further manifest in various forms of fake
news “going viral” (perhaps with intentional manipulation) and
thereby gaining the appearance of truth or popular consensus.
Responding to these early critiques, scholars and theorists interested
in the democratization potentials of computer-mediated
communication frequently turned to the theories of Jürgen Habermas.
Habermas’s account of democratic forms of debate and dialogue focuses
on an “ideal speech situation” that would ensure equal voice to all
participants in the decision-making that directly affected them. While
highly contested, some version of Habermasian deliberative
democracy has remained an important theoretical alternative to more
plebiscite notions of democracy. In particular, Habermas’s early
emphasis on exclusively rational (if not simply masculine) forms of
debate was effectively criticized and amplified by a number of
feminists. So Seyla Benhabib (1986) and Iris Marion Young (2000),
for example, affirm from feminist perspectives and experience the core
intuition that democracy involves free and equal debate that should
shape the decisions that affect us. But they go on to argue that such
equality requires precisely the inclusion of the voices that an
excessively rational (if not bluntly masculine) model of debate has
historically excluded, namely the voices of women and children. Part
of Habermas’s response to early critiques along these lines was to
emphasize solidarity and (empathic4) perspective-taking as necessary
conditions for (ideal) democratic discourse – the practice of
attempting empathically to understand and take on board not only the
(largely) rational arguments but also (sometimes more affective or
emotional) experiences of those with whom we engage in dialogue.
Such (empathic) perspective-taking then serves as a bridge leading to
more forthrightly feminist insistence that our notions of democratic
debate must conjoin (often more affective) narrative with (often more
rational) argument. Finally, as with earlier, more plebiscite visions,
proponents of these more Habermasian and feminist understandings
of participatory dialogue likewise hope that these ideals of egalitarian
dialogue and debate can be more fully realized by exploiting the
multiple forms of communication and interactivity made possible
through networked digital media. In particular, May Thorseth (2006,
2011) helpfully documents how these more inclusive understandings
of what is required for fair and equal dialogue are taken up in
contemporary notions of deliberative democracy and a number of
important efforts to realize such ideal speech situations and
deliberative process in online environments.
We will see in chapter 6 that especially these revised forms of
Habermasian conceptions remain distinctively influential in northern
Europe and Scandinavia, as part of a larger preference here for more
deontological ethics. Again, culture makes a difference – in terms both
of ethics and of our conceptions of technologically mediated
democracy. At the other end of the spectrum, for example, many of the
values and ideological commitments surrounding both computing
technologies and then the internet were shaped (again) by a distinctive
US vision of “techno-liberation.” To be sure, the notion that modern
technologies are key to various forms of liberation and democracy is
rooted in the European Enlightenment – both broadly in its embrace
of a Cartesian vision of science and technology helping to free us from
labor (if not death) and, more specifically, in what Mark Coeckelbergh
calls the “material romanticism” of Marx and Engels in The German
Ideology (1846): these authors observe that “slavery cannot be
abolished without the steam-engine” ([1846] 1976, 38, in
Coeckelbergh 2017, 37). These notions unfolded in the US
communitarian counterculture of the 1960s, but, as Lincoln Dahlberg
observes, “techno-liberation” understandings of such computer-
mediated liberation became increasingly individualized – emphasizing
first of all “virtual communities, places where individual minds met,
free (supposedly) from many of the physical, normative, and legal
constraints of offline embodied life” (2017, 2; emphasis added).
Dahlberg argues that the presumption of disembodied minds, thereby
physically isolated from one another, thus leads to “individualism and
a more individualist conceptualization of freedom” (ibid.). US history
and romantic recreations of the American West specifically come into
play here: “Computer networking was referred to in ‘pioneering’ and
‘homesteading’ metaphors, invoking the adventurous exploration and
settlement of a newly found, untamed, and thus unregulated space by
free and self-regulating individuals (see, e.g., Rheingold, 1993)”
(Dahlberg 2017, 2).
From here, “cyberspace” becomes increasingly conceived of as the
space and engine of a “cyberlibertarianism” that emphasizes
individual freedom from the larger community. The strong resistance
against governmental regulation, taxation, and so forth that
characterizes the US-based tech giants is not simply an artifact of
business interests in minimizing costs and maximizing returns to
shareholders: it is more centrally a culturally rooted ideology that
continues to pervade Silicon Valley in its many manifestations.
Whatever one’s own views of this may be, beyond noting the culturally
specific roots of these beliefs and assumptions, the larger point here is
that such versions of cyber-libertarianism are also strongly consonant
with plebiscite democracy – that is, the emphasis on individual
responses without further ado. Recall here as well Elshtain’s early
warning that “plebiscitism is compatible with authoritarian politics
carried out under the guise of, or with the connivance of, majority
views. That opinion can be registered by easily manipulated, ritualistic
plebiscites, so there is no need for debate on substantive questions”
(1982, 108, in Rheingold 1993, 287). To state this more harshly: US
cyber-libertarianism, as encoded in both the technologies and the
corporations that produce and package them, is thus deeply resonant
with the emergence of “fake news” and other forms of voter
manipulation in Brexit and the US presidential campaign enabled by
these technologies and their producers. Not to mention with the rise of
“digital authoritarianism.”
These concerns are further amplified through a second line of critiques
of early “techno-utopianism,” as it is also sometimes called. To begin
with, our contemporary concerns with “filter bubbles” (Pariser 2011) –
the prevailing design of SNSs and other sources to feed us news and
information that we already agree with, so that we will continue to stay
online and thereby increase advertising revenues – were articulated
early on by Cass Sunstein in 2001. Sunstein identified these capacities
and effects as the problem of “The Daily Me.” The internet and the
Web allow me to filter and choose only those contents I prefer to
consume: more predominantly in recent years, the Big Data profiles
constantly compiled on my browsing and other online behaviors in
turn drive the algorithms and AI systems that increasingly determine
what appears on my screens. Directly contrary to the ideals of
democratic dialogue that force us to confront differing views with our
best evidence and argument – and at the risk that we may sometimes
be proven wrong – we are far more comfortable with retreating into
these cozy nests with those who agree with us and whose views we
already endorse. The result is both fragmentation (a retreat from
dialogue) and polarization (the end of dialogue) (Sunstein 2001, 65).
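Sunstein’s “Daily Me” dynamic can be made concrete with a small
sketch. What follows is a minimal, purely illustrative toy in Python –
not any platform’s actual code – in which candidate items are ranked by
their overlap with topics a user has already engaged with; all topic
labels and item values are hypothetical.

from collections import Counter

def rank_feed(candidates, history):
    """Rank candidate items by overlap with topics in the user's history.

    A crude engagement proxy: the more often a topic appears in the
    history, the higher any item carrying that topic ranks - so items
    expressing differing views sink, and the feed narrows over time.
    """
    preferred = Counter(t for item in history for t in item["topics"])
    return sorted(candidates,
                  key=lambda item: sum(preferred[t] for t in item["topics"]),
                  reverse=True)

history = [{"topics": ["politics-left"]},
           {"topics": ["politics-left", "climate"]}]
candidates = [{"id": 1, "topics": ["politics-left"]},   # agrees with the user
              {"id": 2, "topics": ["politics-right"]},  # the differing view
              {"id": 3, "topics": ["climate"]}]

for item in rank_feed(candidates, history):
    print(item["id"])  # prints 1, 3, 2: the challenging item comes last

Notice that nothing in this ranking suppresses the differing view
directly; it simply never surfaces – which is precisely the
fragmentation and polarization Sunstein describes.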
By 2011, it was already clear that “the powers that be” – both well-
entrenched political parties and their (oftentimes wealthy) supporters
– are quick to learn how to use new media in ways that reinforce their
own place and power, contra democratizing efforts that might
challenge these (Howard et al. 2011; Stromer-Galley and Wichowski
2011; cf. Ess 1996, 198–212). In these lights, the Cambridge Analytica
scandals are but the most recent and egregious examples of the
willingness of political parties – even in otherwise historically
democratic societies such as the US – to exploit these affordances of
internet-facilitated communication for the sake of sustaining and
expanding their own power, whatever the costs to democratic norms
and processes. Can “digital authoritarianism” be far behind?
Habermasian and feminist ideals of rational and empathic debate
amongst equals who regard one another with respect have been
further challenged by still other current developments. Jodi Dean
(2009), for example, has argued that democracy via media
technologies is undermined not only by filter bubbles; in addition,
what she calls “communicative capitalism” depends on monetizing our
online engagements in other ways as well – perhaps most centrally, by
various forms of commodification and self-commodification. Whether
“friendship” online or, more obviously, efforts to acquire wealth and
fame by presenting ourselves online in ways that we hope will attract
“likes,” followers, and thereby more revenue (e.g., contemporary
“YouTube stars” and “influencers”), our communicative spaces thereby
largely reinforce our extant convictions and beliefs (cf. Papacharissi
2010; Lindgren 2017).
As a second example: Dal Yong Jin has analyzed what he calls
“platform imperialism” (2015, 4). Jin observes how a platform such as
Google enables – and constrains – all of our communication, from
SNSs and search engines to smartphone use, etc. Not surprisingly, Google
is joined by Apple, Facebook, and Microsoft as the four major
transnational corporations (TNCs) that design, implement, and
control our platforms. As with the music industries’ efforts to combat
illegal copying (see chapter 3, p. 99), the US supports the dominance
of these corporations and platforms through its regimes and
enforcement of copyright law (Jin 2015, 100–20). Contra hopes for a
democratizing internet, Jin argues that “Instead of developing a public
sphere, these platforms are enhancing the corporate sphere” (ibid.,
185). Worse still, contra promises of greater democracy, equality, etc.,
the primary effects of platform imperialism will be to “intensify the
asymmetrical power relationships between countries possessing
platforms and countries using platforms invented in the U.S.” (ibid.;
cf. Zuboff 2019).
These developments give us good reason, then, to worry about the
future of democratic norms, rights, and processes. Indeed, these are
under direct attack on two fronts – one, as noted above, as more and
more countries follow the lead of China and its emerging Social Credit
System (SCS) to use the vast surveillance powers of internet-facilitated
communications to
monitor and control citizens’ behavior. Two, it is by no means clear
how far “platform imperialism” (Jin 2015) or “surveillance capitalism”
(Zuboff 2019) will be restrained by US laws and regulations – most
especially as these corporations and platforms operate beyond the
borders of the US. Against these dark backgrounds, there are some
bright spots, however. To begin with, as we saw in chapter 2, the
European Union continues to increase its data privacy protections,
precisely as privacy is recognized as one right among many that are
foundational to human dignity, autonomy, and thereby democracy.
Indeed, these value commitments are central to the emerging
development of autonomous and intelligent systems – both within the
EU and, more broadly, in the IEEE’s development of “ethically aligned
design” (IEEE 2019). It may also be that the anti-democratic threats of
“fake news” are receding (Guess, Nyhan, and Reifler 2018).
Moreover, Merlyna Lim (2018) has recently published the results of a
longitudinal study of activist movements since 2010 in Tunisia, Egypt,
Malaysia, and Hong Kong. Contra the bias of especially Western news
sources in highlighting the importance of social media in these
movements, Lim shows that successful protests – ones that further
lead to enduring political and social change – depend not solely on
social media: in addition, “the human body” is “the most essential and
central instrument” in what she characterizes as “Hybrid human–
communication–information networks that include social media”
(2018, 129). In other words, in a post-digital era, we recognize that
democracy will not flow automatically from social media and its
affiliated infrastructures. On the contrary, as the report on “digital
authoritarianism” makes clear, these technologies can be used with
equal force to censor, suppress, and control subject populations. If
democracy and its attendant norms and values are to be established
and preserved, embodied resistance and activism are also required.
(Similar lessons are learned from the #sayhername movement against
racist violence directed at women of color: Schwartz 2019.)
INITIAL REFLECTION/DISCUSSION/WRITING QUESTIONS: DEMOCRACY AND
DEMOCRATIZING ETHICS IN A DIGITAL AGE?
1. WHAT DO WE MEAN BY “DEMOCRACY?”
We have seen that there are at least two distinctive conceptions of
“democracy,” beginning with a strongly influential libertarian view
that emphasizes plebiscite forms of democracy, vis-à-vis feminist and
Habermasian accounts that emphasize, rather, the importance of
dialogue and debate shaped by rational argument, diverse narratives,
and ethical commitments to equality, freedom, solidarity, and
perspective-taking. (Depending on how far you and your class care to
go in these directions, you can also explore a third alternative –
communitarian democracy – which stresses service to the common or
public good: see Abramson et al. 1988, 22–5 for an early account.)
Given these two poles as a starting point:
(A) Articulate as best you can what you see as the best and most
desirable form(s) of democratic polity and processes – especially as
these might be facilitated by digital media and networked
communication.
(B) Identify where your notion of democracy lies on a continuum
between the two poles of more libertarian or more
feminist/Habermasian forms of democracy.
(C) Can you offer arguments, evidence, and/or other forms of warrant
that would support and justify your choices? These can come in at
least two forms: arguments, etc., for your own choices, and/or
arguments, etc., criticizing the alternative(s).
2. THE ETHICAL REQUIREMENTS OF DEMOCRACY?
In an early effort to apply Habermasian and feminist thought to the
topic of online democracy, I concluded by observing that the discourse
ethic requires the ability to engage in critical discourse and the moral
commitment to practicing the ability to take others’ perspectives and
thus seek solidarity with others in a plurality of democratic discourse
communities (Ess 1996, 220).
In the ethical frameworks we have examined here, we can rephrase
this to include two ethical components:
a deontological insistence on respecting the arguments and
experiences of Others as equals in a shared discourse community;
and
a virtue ethics argument that the correlative perspective-taking
required for a free and equal dialogue and debate is an ability that
must be practiced – i.e., such perspective-taking stands as a habit
of excellence or virtue that requires practice if it is to be acquired
and exercised well. By the same token, the Habermasian
requirement for empathic solidarity likewise invokes the primary
virtue of empathy – again, a capacity or ability that must be
acquired and practiced.
Given your definitions and affiliated requirements for “democracy” as
you have outlined above, how far are either of these ethical dimensions
necessary for fostering democracy – whether online or offline – as you
understand it?
Notes
1 A positive “util” is a proposed unit of pleasure; a negative “util”
would quantify a dissatisfaction, pain, etc. The util was proposed as
a unit of measurement by nineteenth-century economists inspired
by utilitarianism and specifically Jeremy Bentham’s “hedonic
calculus” as a first effort to maximize pleasure and minimize pain in
straightforwardly quantitative ways. The term is commonly used to
illustrate utilitarian approaches and economic notions of utility
(e.g., Baumol and Blinder 2011, 85). And if you have difficulty
assigning “utils” (either positive or negative), this is because,
despite best efforts otherwise, no one has succeeded in defining
such a measurement in any consistent way – first of all, because our
experiences of pleasure and pain vary widely (ibid.). This difficulty
in fact highlights a serious limitation of utilitarianism: see the
discussion of this problem in chapter 6, pp. 221–2.
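By way of illustration only: a toy, Bentham-style “hedonic calculus”
in Python, with every util value invented – which is exactly the
measurement problem just described.

# Toy hedonic calculus: sum hypothetical utils (positive = pleasure,
# negative = pain) across everyone affected; the act with the greatest
# net total "wins." All values here are invented for illustration.
acts = {
    "keep scrolling during the visit": {"me": +2, "elderly relative": -3},
    "put the phone away":              {"me": -1, "elderly relative": +4},
}

for act, utils in acts.items():
    net = sum(utils.values())  # Bentham-style aggregation: pleasures minus pains
    print(f"{act}: net utils = {net:+d}")

# "put the phone away" wins (+3 vs. -1) - but only if we trust the
# invented numbers: no consistent way of measuring utils has been defined.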
2 I use “national/cultural” to mean “national and/or cultural” – that
is, making clear that “culture” is not always synonymous with a
given nation-state. For example, many nation-states encompass
multiple cultures as defined by distinctive languages and dialects:
these in turn are often distributed across national boundaries, for
example, as French is shared among the francophone countries
(themselves widely diverse in terms of other cultural elements).
3 These are pseudonyms used to protect the confidentiality of
Hovde’s informants.
4 Keep in mind that empathy is a virtue – i.e., a capacity or ability
that requires active cultivation and practice. In this way,
Habermasian conceptions, while rooted in deontological ethics, also
shade into virtue ethics.
CHAPTER FIVE
Still More Ethical Issues: Digital Sex, Sexbots,
and Games
The plethora of available online pornographies guarantees that
virtually any stance on porn can be backed up with multiple examples
supporting one’s argument.
(Paasonen 2011, 432)
People will want better robot sex, and even better robot sex, and better
still robot sex. … what are perceived to be natural levels of human
sexual desire [will] … conform to what is newly available – great sex
on tap for everyone, 24/7.
(Levy 2007, 310)
I’m not saying video games make you a killer. But if you’re a
psychopath, video games help you get in the mood to do the killing.
Pat Brown (CNN 2012)
Chapter overview
We begin with the ethics of pornography*1 – depending first of all on
how we may define it: not surprisingly, definitions are dramatically
culturally variable. As with violence in games, a central question is
how far the production and consumption of these materials impact our
real-world attitudes and behaviors. We examine basic philosophical
and religious frameworks that shape contemporary reflections on
sexuality and identity and then explore central ethical issues here in
terms of utilitarianism, deontology, feminist ethics / ethics of care,
and virtue ethics. (If you have not already reviewed these frameworks
for ethical decision-making in chapter 6, you should do so before
moving into this chapter.) Increasingly sophisticated “sexbots” offer
new – and literally more embodied – variations on the ethical
dimensions in play here. We then turn to the central questions of how
far computer-based gaming experiences of violence and rape may
affect offline behaviors and attitudes – again using utilitarianism,
deontology, and virtue ethics as frameworks for analysis.
Introduction: Is pornography* an ethical
problem – and, if so, what kind(s)?
To state the obvious: the internet and digital media more broadly are
awash with pornography* of every imaginable stripe and genre. The
increasing diffusion of internet-connected digital media means that
more or less anyone who cares to do so can easily consume, produce,
and distribute “sexually explicit materials” (SEMs), to use the ethically
more neutral term. At the same time, the complex interplays between
digital media and the larger spheres of our lives mean that
pornography*, however we may define it, is thoroughly infused
throughout contemporary societies. These complex interplays have
been further amplified by Web-based technologies and
communication venues – most commonly, social networking sites
(SNSs), micro-blogs (e.g., Facebook status updates and Twitter), and
“produsage” sites such as YouTube, and, increasingly, the Dark Web
(Gehl 2016). All of this is further coupled with ever more predominant
internet access via mobile devices: anyone with a smartphone can
easily record still images and videos and then upload them for all of
the internet world – currently nearly two-thirds of the planet’s
population – to see.
These facilities and capacities thereby both continue and dramatically
expand earlier forms of amateur pornography, for example, while
simultaneously enabling new forms of SEMs such as Netporn, which
blurs the “boundaries of porn producers and consumers,” and thereby
entails nothing less “than a redefinition of pornography as a cultural
object in terms of esthetics, politics, media economy, technology and
desire” (Paasonen 2010, 1298). Such redefinition, in particular, occurs
within the subgenre of alt porn, defined in part by “its exhibition of
non-standard subcultural styles, community features and interaction
possibilities” (ibid., 1299).
By the same token, increasing access to the internet by way of mobile
devices – i.e., devices that can (and usually do) accompany us more or
less everywhere – dramatically complexifies the contexts for the
consumption and production of SEMs. For example, one of the
premier scholars and authorities in these domains, Feona Attwood
(2018) shows in fine detail how the complex interactions between
sexuality, gender, sexual identity(ies) and representation vis-à-vis our
rapidly changing and diffusing technologies over the past 20 years or
so have first of all led to a concomitant diffusion of all things sexual –
including ever more diverse forms and expressions thereof – into ever
more public spheres. Against the background of this expanding
spectrum of sexual identities, practices, gender, and so on, as interwoven
with media, our focus here on pornography* represents but one thread
among many in these domains. At the same time, as Attwood’s book
indexes, serious study of pornography* has come out of the academic
closet in recent years. For example, the journal Porn Studies has been
in publication since 2014.
These recent developments – that is, expanding forms and expressions
of sexuality as entangled with ever more communication venues that
are ever more interwoven throughout our lives, coupled with a
dramatically growing body of research literature – profoundly
complicate our approach to pornography and SEMs. First of all, the
increasing diffusion and presence of “sex media” throughout our
lifeworlds2 have thereby made the difficulties of defining what counts
as pornography* – i.e., sexually explicit material that is potentially
questionable for at least some audiences and age groups – that much
more complex. Merely involving (relatively) explicit
representations of sex and sexuality is hardly enough to count, at least
in many of the increasingly secularized societies of the West. Second,
researching pornography* likewise becomes that much more
sophisticated and detailed. To begin with, both utilitarian and
deontological approaches arguing for restrictions against
pornography* very frequently depend on the claim that such materials
entail significant harms (and thus negative utils), such as greater
sexual aggression toward children, girls, and/or women. As also holds
for efforts to restrict violence in games, such claims of effects,
however, are intrinsically difficult to establish empirically: beyond the
standard problem that correlation between porn or game
consumption and higher rates of (sexual) aggression, for example,
does not prove causation – empirical researchers are faced with ever-
changing environments that are increasingly infused with sexual
representations of many sorts: being able to isolate porn consumption
as a single variable leading to increased aggression becomes
increasingly difficult indeed (Nash et al. 2015).
Third, this mediatization of sex and sexuality thereby intersects with
the larger patterns of mediatization – meaning the various ways in
which we use digital (and analogue) media to represent ourselves and
our lives, both to ourselves and to others: again, as digital media
continue to diffuse into every corner and wrinkle of our lives, so more
and more of our lives are experienced through and with these media.
The pocketfilm Porte de Choisy, for example, which otherwise violates
earlier notions of bedroom and bathroom privacies, can be understood
as simply an extension of our increasing ability to record and present
ourselves via digital technologies (Verrier 2007). Re-presenting
ourselves through the resulting artifacts – whether in the form of a
text-based blog, an online photo album, a home-made video – is a way
of communicating with one another in enhanced ways, ways that are
more enjoyable because they are quick, convenient, engage more of
our communicative senses (sound and vision, not simply reading), and
are globally accessible. Relatedly, various forms of “sexting” – sending
sexually suggestive or simply explicit images (e.g., “dickpics”) – are
becoming more widespread. And this is not only among young people,
whose exploitation of SNSs such as Snapchat and Instagram in these
directions may be of considerable concern and/or the occasion of
another “moral panic” (cf. chapter 1, pp. 9–10). In addition, the
world’s richest man, Jeff Bezos (founder of Amazon), was recently
caught up in a power struggle with a US media conglomerate over the
latter’s publication of Bezos’s intimate texts and pictures. In Bezos vs.
American Media, business and media empires – and perhaps something
as foundational as freedom of expression – may be at stake.
Further, as Anna Reading (and others) argue, as we are the ones who
take charge of and direct these media productions, we thereby (re)gain
agency and control over our media self-representations (2009). Such
mediatized self-revelation may then be experienced as a form of
empowerment and liberation in an age of surveillance. The same may
be true, at least in part, regarding sexuality and gender: an especially
strong argument in favor of online SEMs and their amateur
production is precisely that these allow persons to explore otherwise
marginalized sexualities (including GLBTq, i.e. gay, lesbian, bisexual,
transgendered, and/or queer) and sexual preferences (e.g., bondage,
discipline, dominance, submission, sadism, and/or masochism – BDSM
for short [Thorn and Dibbell 2012]) – and thereby to determine for
themselves their own sexual identities and preferences. Pornography*
may thus serve nothing less than the (high) modern values of
emancipation, autonomy, agency, and equality (cf. Bromseth and
Sundén 2011). To be sure, this line of argument directly contradicts
ethical objections to pornography* and SEMs as objectifying women
and children (and, in some instances, men): encouraging us to see
women, children, and/or men as “just meat,” such objectification
obscures, if it does not eliminate, their agency and autonomy (Adams
1996). And without agency and autonomy, there is no person “there”
to be emancipated or regarded as an equal.
Finally, cross-cultural perspectives make all of this that much more
difficult. Not surprisingly, judgments and attitudes regarding bodies
and sexuality vary dramatically from culture to culture. For example,
at least early in this century, material that merely implies sex, such as
beauty pageants, counted as pornography* in India
(Ghosh 2006); in Indonesia, the term is bound up with laws regulating
women’s clothing and demeanor, including public displays of affection
(Lim 2006, both cited in Paasonen et al. 2007, 16). By contrast, in
1969, Denmark was the first Western nation to legalize pornography*
(Time 1969) – and not accidentally. In Denmark, and Scandinavia
more broadly, bodies and sexuality – including the sexuality of
children and adolescents – are widely regarded as simply positive
aspects of human nature and experience. Especially in Denmark, there
is less concern with pornography* as a possible problem, especially for
young people (Haddon and Stald 2009). Historically, in European
countries more generally, children were more concerned with the
problem of cyberbullying than with unwanted exposure to SEMs
(Livingstone et al. 2011: 25). While the most recent EU Kids Online
survey data have yet to be completely analyzed and published, one of
the striking findings in Norway is that, while sexting behaviors have
gone up (e.g., among 15- to 17-year-olds, who report the highest levels
of these behaviors, 49% of boys, 36% of girls) – along with exposure to
pro-ana, self-harm, suicide sites, etc. – accessing pornography sites as
such has somewhat declined: from 46% of children between 9 and 17
years old in 2010 to 40% in 2018 (Staksrud and Ólafsson 2019).3
In these contexts, what counts as pornography*? For example, a Jeff
Koons painting of himself and his wife, former porn-star Cicciolina, is
delicately titled Ice – Jeff on Top Pulling Out; as it portrays genitalia,
the image would certainly be unpublishable in US newspapers. But it
appeared without further ado in the perfectly serious Danish
newspaper Politiken as part of an article covering an exhibition on
eros at the Aarhus Art Museum (Hornung 2010). Moreover, in what
some have called a post-feminist era (i.e., one in which gender equality
has – allegedly – largely been achieved and hence the feminist work
toward such equality is no longer necessary), prominent phenomena,
such as the “#freethenipple” campaign, exploit tactics such as public
exposure of women’s breasts as a form of protest against “the
sexualization of the breast”: this is part of larger work against
patriarchy, defined as “the source that takes away women’s power to
choose, devalues them and their ability to be themselves and to enjoy
their bodies” (Rúdólfsdóttir and Jóhannsdóttir 2018, 134, 142). In
some instances, women protesters will perfectly replicate the tropes of
pornography* (e.g., appearing to perform oral sex on dildos, as
members of the FEMEN movement did in protest at a G7 meeting) –
precisely in the name of women’s choice and emancipation. You will
find it difficult, however, to come up with stories, much less images, of
these women in newspapers of record (e.g., the Guardian, the New
York Times, and so on), but neither would we expect them to publish
the Jeff Koons painting.4
A particular difficulty here is that, as compared with Europe – and
especially Scandinavia – attitudes and judgments regarding bodies,
sexuality, and thus pornography* in the United States are considerably
more restrictive. This is primarily thanks to historical and
contemporary religious attitudes and commitments. In 2015, c. 70
percent of US citizens described themselves as Christian (Pew
Research Center 2015). Roman Catholics and Evangelical Christians
constitute the largest groups of these (c. 45 percent) – traditions that,
as shaped by Augustine and his doctrine of Original Sin, identify
women and sexuality as primary ethical problems (to put it politely).5
These folk are free to believe as they choose. But the difficulty for “the
rest of us” is that, as with other things internet and digital, much of the
discussion regarding pornography and digital media largely arose in
and was dominated by both popular and scholarly voices based in the
United States (Paasonen 2011, 427). Moreover, all of the major
platforms and communication venues through which our digital lives
– and thereby SEMs – flow are owned and operated by the US-based
corporations Amazon, Google, Facebook, Apple, and Microsoft. These
corporations have been slow (to put it politely) to recognize that their
US-based conceptions of sexuality and what counts as pornography*
are not universal. The result is “corporate censorship” of materials that
are widely agreed not to count as “pornography” – exemplified in
Facebook’s censoring the iconic picture of “the napalm girl,” 9-year-
old Kim Phúc, running naked in terror and pain from the napalm
bombing of her Vietnamese village (Levin, Wong and Harding 2016).
Questions of pornography* are thus deeply entangled not only with
matters of culture, but also with the ethical and political matters of
freedom of expression as further complicated by national and
international matters of corporate power, politics, and dominance –
what Dal Yong Jin has aptly called “platform imperialism” (2015).
This is not to say that a US-based scholar or corporate view is
automatically suspect: it is to say that such views – as with views from
any other cultural domain – tend strongly to be shaped by a specific
set of cultural backgrounds. The first point is to be aware of these
backgrounds – and how they vary from European through
Scandinavian to Asian, African, indigenous, and other traditions – in
order to avoid inadvertent dominance of one view and,
simultaneously, to raise our awareness of the role of our own cultural
backgrounds in shaping our own judgments and attitudes.
INITIAL REFLECTION/DISCUSSION/WRITING QUESTIONS: GENDER,
SEXUALITY, CULTURE, AND PORNOGRAPHY*
1. In light of these initial comments and first ethical arguments:
(A) How would you characterize the prevailing attitudes and
judgments, both positive and/or negative, regarding bodies, women,
and sexuality in the country/culture you count most as your own?
(B) Insofar as your own judgments and attitudes regarding bodies and
sexuality may be different from the prevailing ones around you, can
you characterize these (at least for yourself, if not for your sister- and
fellow-students and/or instructors just now)?
(C) How would you define pornography*? Be careful here: given the
considerable diversity of SEMs “out there” (both online and offline),
you will want to start building a continuum of materials that would
either count or not count, in your view, as pornography – and then
what is for you ethically objectionable pornography.
For example, child pornography is all but universally condemned and
criminalized. But what about SEMs involving violence, such as rape or
torture – at the extreme, “snuff films” that depict the death of the
(usually female) object of sexual violence and torture? At the other end
of the continuum – what might be sexually explicit material that, in
your view, counts more as erotic art, not pornography? Finally: where
on the continuum is a line crossed into pornography – and then
ethically objectionable pornography?
(D) Given your definition of “pornography,” what are your personal
responses to it – including any ethical ones?
(E) Equally importantly: can you identify how far your own responses
to pornography are (in)consistent with the prevailing judgments and
attitudes regarding bodies and sexuality you describe above?
2. Given your responses to pornography (1.D, above), what arguments,
evidence, experiences, and/or other grounds can you offer to support
those responses? For example, you may want to argue that exposure to
pornography may be harmful for children and adolescents, as it might
foster both less than respectful attitudes toward young girls and
women, and understandings of sexuality that emphasize power and
exploitation rather than respect, equality, and mutual intimacy.
These sorts of arguments are common consequentialist or utilitarian
arguments and are frequently invoked in debates surrounding
pornography and its regulation. But you may well have other
arguments, etc., to offer.
3. Review the first two arguments sketched out above regarding
pornography and SEMs as:
(A) ethically objectionable because these materials objectify persons as
“just meat” and thereby deny them agency, autonomy, and equality,
vis-à-vis
(B) ethically defensible because these materials may contribute to the
(high) modern values of emancipation, autonomy, agency, and
equality, especially for those persons whose sexual identities and
preferences do not align with the preferences and identities dominant
in their culture.
Which of these two arguments do you find more persuasive – and
why?
4. Both of the arguments in (3) above are exemplars of deontological
arguments. That is, as Kant argued, human beings are primarily
autonomous, and thereby capable of rational self-rule. This means
that, far from being seen and treated as “just meat,” free human beings
must be allowed to determine their own ends or goals, rather than
serve as the means (“just meat”) to ends and goals imposed by others.
On this line of reasoning, free human beings thus have (near) absolute
rights, beginning with the right to respect from others and the right to
be treated as equals.
The debate here, then, is whether or not – and, if so, how and in what
ways – SEMs serve to enhance or degrade this core human autonomy.
In this light, one’s judgments and attitudes toward bodies and
sexuality become especially critical to the debate. A Scandinavian
feminist, for example, as someone who is inclined to regard sexuality
as normal and natural, may be more open to the view that SEMs can
work to enhance human autonomy. A conservative US Christian, by
contrast, may be persuaded that bodies and sexuality are implicated in
Original Sin and are thereby to be enshrouded in privacy, if not shame,
and so she or he is far more likely to view SEMs as only reinforcing
such already strongly negative views toward women and sexuality.
From this perspective, it is hard to see how they could thus work for
equality and the emancipation of women.
In this light:
(A) given that you endorse a deontological emphasis on human beings
as primarily free agents who must be respected and not treated as “just
meat,” how far do you find SEMs to be more likely to work:
(i) against emancipation and equality, and/or
(ii) for emancipation and equality?
(B) especially if you find yourself coming down strongly on the side of
either A(i) or A(ii), can you tell whether your response is consistent
with your personal and/or cultural judgments and attitudes toward
bodies, women, and sexuality?
Pornography*: More ethical debates and
analyses
These major difficulties of pornography* and ethics underline
Susanna Paasonen’s warning at the opening of this chapter: “The
plethora of available online pornographies guarantees that virtually
any stance on porn can be backed up with multiple examples
supporting one’s argument” (2011, 432). To say this somewhat
differently, the still more recent dramatic explosion of diverse forms
and genres of SEMs online (and offline: Attwood 2018) thus makes it
difficult to move forward with any sort of ethical analysis and
arguments without first defining a focus: that is, which (relatively)
specific form(s) of pornography* do we have in mind?
One extensive UK survey showed that the largest number of both
women and men primarily visit so-called tube sites – i.e., the porn
equivalents of YouTube such as porntube.com. More recently, one of
these sites, Pornhub, provided a wealth of statistics on its visitors.
Considering the source, the significance and quality of these numbers
are to be taken with many grains of salt. Still, these statistics reinforce
Attwood’s account of growing diversities in sexuality and media. At the
same time, there is no shortage of variations of SEMs designed
primarily to arouse heterosexual males through a focus on women as
both the targets and active agents of male sexual pleasure. With this
last genre of pornography* as a starting point, we’ll now turn to three
different analyses and arguments that should be useful in their own
right and as providing examples and models for approaching other
mediated SEMs – and in the following sections on robots and games.
Pornography* online: A utilitarian analysis
At least in the English-speaking world, approaches to pornography*
online often follow utilitarian lines of argument. Classical liberals,
beginning with John Stuart Mill, defend freedom of speech and object
to censorship on straightforwardly utilitarian grounds. First, freedom
of speech is argued to lead to such positive consequences as individual
happiness and a flourishing society. By contrast, censorship is rejected
because of its many negative consequences, including inadvertent
suppression of what may be grains of truth in an otherwise suspect
claim or view (Warburton 2009, 22–31). (As Warburton points out,
Mill’s arguments are directed to freedom of expression and freedom of
speech. For pornography* to be defended on these grounds, it must
first be shown to count as speech or expression. For arguments pro
and con, see ibid., 60–4.) We can add to these considerations common
neoliberal objections to proposed internet regulation as being too
costly – as imposing unneeded costs and inconveniences on
governments, the corporations responsible for maintaining the
internet infrastructure, and users/consumers. On the other hand,
critics of pornography* argue that the production and consumption of
such materials are harmful to women – as well as children, especially
as trade in child pornography has apparently increased thanks to
rising access to the Dark Web. Indeed, for all of the debate regarding
the difficulty of demonstrating causality between consumption, on the
one hand, and attitudes and actions, on the other, a recent meta-study
of some 135 (English-language) studies flatly concludes:
Both laboratory exposure and regular, everyday exposure to this
[sexualized media] content are directly associated with a range of
consequences, including higher levels of body dissatisfaction, greater
self-objectification, greater support of sexist beliefs and of adversarial
sexual beliefs, and greater tolerance of sexual violence toward women.
Moreover, experimental exposure to this content leads both women
and men to have a diminished view of women’s competence, morality,
and humanity.
(Ward 2016, 560)
Again, while claims of causal connections must be viewed cautiously,
the upshot is a simple utilitarian calculus: do the possible costs and
other negative consequences of some sorts of restrictions on
consuming pornography* outweigh the possible benefits of such
restrictions – namely, reducing avoidable harms to women?
Part of our response here depends first of all on determining just what
the possible costs would be – in utilitarian terms, how many negative
utils would be generated by efforts at censorship or regulation? This
would depend in turn, of course, on just what sorts of efforts we have
in mind. For example, the UK has implemented an approach to
filtering SEMs called “active choice-plus.” Under this system,
customers signing up for internet access are confronted by their
Internet Service Providers (ISPs) with the choice to “opt in” to various
levels of access to porn and other potentially harmful materials. That
is, the default setting is to exclude these materials, thereby requiring
those who want access to them to indicate as much. As might be
imagined, the ISPs involved complained of the expense of installing
and maintaining such filters, along with affiliated costs of developing
services for allowing customers to opt in to such sites. Each customer
required to take the time and action to opt in will also incur at least a
few negative utils – multiplied in turn by however many such
customers there may be.
On the other hand: how many positive utils might be gained by a
potentially significant reduction in harms against women? For
example, among other impulses toward the development of active
choice-plus were the claims of MP Ann Coffey – namely, that there has
been a “surge” of sexual groping and manhandling of young girls in the
UK: around one-third of sixth-form girls have been targets. Coffey,
moreover, squarely blamed this rise of sexual aggression against
young girls on internet pornography* fostering “distorted” sexual
attitudes among teenage boys (Martin 2012). So: how many positive
utils can we assign – presumably a very large number – to the young
girls who would no longer be victimized in this way should stronger
blocks be placed on access to internet pornography*?
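To make the shape of this calculus concrete, here is a minimal sketch in Python. Every number and variable name in it is a hypothetical placeholder invented purely for illustration – not an empirical estimate, and not specific to the UK policy itself; the point is only to show how the weighing works, and how easily the outcome turns on the utils we assign.

# A toy utilitarian calculus for the "active choice-plus" proposal.
# All values below are hypothetical placeholders, not empirical estimates.

isp_filtering_cost = -50_000      # negative utils: ISPs' expense of installing/maintaining filters
opt_in_inconvenience = -1         # negative utils per customer who must take action to opt in
customers_opting_in = 2_000_000   # hypothetical count of customers who want to opt in

utils_per_avoided_harm = 1_000    # positive utils assigned to one avoided act of aggression
predicted_avoided_harms = 5_000   # hypothetical: rests on contested causal predictions

total_costs = isp_filtering_cost + opt_in_inconvenience * customers_opting_in
total_benefits = utils_per_avoided_harm * predicted_avoided_harms

net_utils = total_costs + total_benefits    # -2,050,000 + 5,000,000 = +2,950,000
print(f"net utils: {net_utils:+,}")         # positive: the proposal "wins" on this toy model

# Reduce predicted_avoided_harms from 5_000 to 2_000, however, and the
# benefits (2,000,000) fall below the costs (2,050,000): the sign flips.
# The calculus is only as good as the predictions and quantifications
# fed into it.

Nothing in this sketch settles the question, of course: it simply exposes exactly where the contested judgments – the causal predictions and the assignments of utils – enter the calculus, which is precisely where the difficulties discussed next arise.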
As discussed in chapter 6 (pp. 221–5), this example brings forward
three of the critical reasons why applying a utilitarian cost–benefit
analysis in practice is so difficult. The first question is: how far can we
be confident of our predictions of the outcomes of our possible
choices? That is, can we be confident of the predictions on either side
– whether of high negative and/or of high positive consequences of
imposing new controls on access to online SEMs?
Second: even if we could predict these outcomes with some degree of
certainty – how do we quantify costs and benefits beyond the
monetary costs involved? For example, how many negative utils
should we assign to a customer being required to take the time and
trouble to opt in to access currently available by default? How many
positive utils can we assign to a predicted reduction in sexual
aggression against young women? While the utilitarian approach
forces us to weigh the negatives and positives against one another, it
seems clear that at least some aspects of human experience, including
a sense of security against unwanted and unjustified aggression, defy
straightforward quantification. Hence, weighing pros and cons
becomes very uncertain indeed.
Third: recall the debates regarding causal linkages claimed to exist
between consumption of online SEMs and aggressive attitudes toward
women. While the evidence for these linkages may be better
established in more recent research, it’s always possible that new
experimental approaches may be devised 5 or 10 (or 50) years down
the road that would provide us with more reliable evidence one way or
another. It is also perfectly possible that little to no further progress
along these lines will be made. In short: the future, and with it our
future knowledge, are, by definition, uncertain.
In the meantime, however, we have to make our judgments and
decisions nonetheless. The best we can do (so far) is to judge and
decide based on the best evidence we have (so far), but this still leaves
us facing the first two problems attending any utilitarian approach –
namely, uncertainty about predicting outcomes, and the very great
difficulty of quantifying outcomes in order to balance them against one
another in a cost–benefit analysis.
As we see in chapter 6, these sorts of limitations mean that
utilitarianism doesn’t always bring us very far in our efforts to grapple
with complex moral issues. And, precisely because of these sorts of
limitations, many ethicists argue we must expand our ethical decision-
making frameworks to include deontology and, often, virtue ethics.
This turn is exemplified in the next analysis.
“Complete sex” – a feminist/phenomenological
perspective
In her article “Better Sex” (1975), Sara Ruddick develops a fine-
grained phenomenological account of sexual experiences.
(Phenomenological analyses use carefully disciplined attention to our
lived experience as primarily embodied beings.) Her account offers
a much richer description of human sexual experiences than those that
focus on sex and sexuality as something involving only bodies.
These latter accounts derive from at least two sources. The first is a
kind of dualism – whether religious or philosophical – that makes a
strong separation between the person as a soul or mental agent, on the
one hand, and their body, including their sexuality, on the other.
These dualisms have predominated in Western traditions since at least
the time of Augustine, and are carried through into modern
philosophical thought in the profoundly influential work of René
Descartes ([1637] 1972). Beginning with William Gibson’s novel
Neuromancer, which invented the term “cyberspace” and defined it as
opposed to the world of “meat,” these dualisms predominated in 1980s
and 1990s understandings of “cyberspace” and virtual worlds as
radically different from our more ordinary, offline worlds (1984, 6;
chapter 4, p. 146). The second source is simple materialism – the view
that holds that human beings are fully reducible to the workings of
their solely material bodies, as described by and predictable through
the various natural laws of biochemistry, neurology, simple physics,
and so forth. On this view, there is no free human agent – only the
illusion of freedom. We really are “just meat” – no different in any
significant way from, say, dolphins, other hominids, or cows. For
many in the contemporary world, especially those raised in highly
secular societies in Northern and Eastern Europe, and some parts of
Asia, this view may seem common sense and unproblematic. Be aware,
however, that this view is rejected by most contemporary
philosophers, who opt instead for a position called “compatibilism.”
This view holds that “free will is compatible with [material]
determinism” (McKenna and Coates 2018). The trick here is to be a
compatibilist without being a dualist; it is not necessarily easy, but it
can be done.
Ruddick criticizes these dualistic understandings for two reasons.
First, they result in an account of sexuality that radically separates a
given individual’s sense of unique identity and distinctive selfhood
from “sex” as something that takes place solely between (more or less
interchangeable) bodies. Second, in doing so, such understandings
seem inevitably to lead to an ethically problematic account of sexuality
– namely, one in which individuals can use one another’s bodies only
as the means to satisfy their own desires. Ruddick does not think that
such understandings of sexuality are necessarily mistaken. But she
first argues that, from a phenomenological perspective, they are
incomplete. As we know from some of our most intense experiences –
such as playing sports – we do not feel or experience some sort of
mind–body dualism: rather, we enjoy these experiences so profoundly
and completely in part just because they involve an immediate sense of
unity between our selves (as unique and distinctive selves or agents)
and our bodies. (The German phenomenologist Barbara Becker [2001]
later coined the term “body-subject” [LeibSubjekt] to denote this
experience of being in the world as an individual in both mind and
body.) Ruddick does not argue that all our sexual experiences must
involve such direct unity or embodiment. Rather, she maintains that
those that do are morally preferable, first of all because our own
personhood and autonomy cannot be separated from our bodies in
such experiences. Specifically, to approach sexuality as embodied
beings – as individuals and moral agents who are our bodies,
especially as they are suffused with sexual desire – issues in what
Ruddick calls “complete sex,” a sexual engagement infused with
mutuality and reciprocal care and concern for each other. Such
“complete sex,” as inextricably interwoven and suffused with the
distinctive identities of the persons involved, thereby literally
embodies the felt uniqueness of the relationship with each other, along
with other feelings such as pride and gratitude – all of which reinforce
the status of the Other as an equal person, not a thing.
In contrast to more casual approaches to sex that treat any given body
as more or less interchangeable with any other, complete sex thereby
fosters the Kantian duty of respect for the Other as a person – that is,
precisely as an autonomous and unique person deserving fundamental
respect. In Kantian language, this means a person we must always
treat as an end in itself, never as a means only – i.e., as “just meat.”
Indeed, Ruddick’s analysis helpfully points toward what many of us
find most important in such experiences – namely, the sense of being
loved fully and completely, precisely as the unique body-subject that
we experience ourselves to be much of the time. On this basis, finally,
Ruddick argues further that complete sex fosters two additional values
– namely, the deontological norm of equality and the virtue of loving
(Ruddick 1975, 98ff.).
Many of my students, especially those who are more secular, find
Ruddick’s account valuable: it helps them make sense of one of their
primary moral intuitions about their sexuality and intimate
relationships. That is, these students (among others) are “serial
monogamists.” Contra an earlier sexual ethic that would limit sex to a
single partner over a lifetime, serial monogamists are happy to have
sexuality as part of some sort of close, intimate, and exclusive
relationship that will endure for some length of time, whether a few
weeks, months, or years. Once a given relationship is over, the serial
monogamist is perfectly free to take up a sexual relationship with
another person or persons over time. But generally, within a given
relationship, the intuition is that for one’s partner to “have sex” with
someone else amounts to some form of “cheating” or infidelity.
A consistent dualist, however, strongly separates body and sexuality
from personal identity – and thus from the ethical commitments and
norms associated with respect for persons as unique individuals. Such
dualism has difficulty justifying serial monogamy. A dualist must
regard sexual activity as simply one more activity of bodies as radically
distinct from their “owners” as individuals. So how can sexuality have
any connection with, for example, personal commitment to a romantic
relationship with another as somehow unique, distinctive, and thus
excluding sex with other bodies? Why should “sex,” if it’s simply a
matter of actions between two more or less interchangeable bodies, be
any more personal or exclusive than, say, shaking hands?
By contrast, many of my students find in Ruddick’s analysis a way of
accurately describing, first of all, their own experiences of being a
“body-subject” in at least their better experiences of sexuality in a
(serially) monogamous relationship. This experientially oriented,
phenomenological account further helps them make ethical sense of
their moral intuitions that, as serial monogamists, there’s something
ethically problematic about sex with someone else besides their
current partner – but without having to appeal to religious
frameworks they reject.
In short, Ruddick’s account brings forward:
(1) a deontological emphasis on treating one another as free and
unique persons, where the recognition of such autonomy requires
fundamental respect for one another as equals, and
(2) a virtue ethics emphasis on loving – that is, as the practice of
learning to treat one another as individuals worthwhile in themselves.
Both of these directly challenge experiences of sex and sexuality that
instead present a person as just body, as “just meat” – that is, as an
object that (not who) exists solely as a means to our own ends and
desires.
Ruddick does not directly mention pornography, but she does
comment that “Obscenity, or repeated public exposure to sexual acts,
might impair our capacity for pleasure or for response to desire” (1975,
102). This at least raises the question as to whether our enjoyment of
the sorts of pornographic materials described above – i.e., ones that
consistently depict women and children (and sometimes men) as
exclusively the targets and agents of fulfilling male pleasure and desire
– reinforces, and/or inclines us toward, adopting a dualistic attitude
toward body and sexuality that sees the sexual other as “just meat.”
Insofar as our answer to this question is “yes,” then we would have
reason to be cautious – perhaps very cautious – regarding
consumption of pornography of these sorts.
SECOND REFLECTION/DISCUSSION/WRITING QUESTIONS: ACCESS TO
PORNOGRAPHY* ONLINE
1. We’ve now seen three approaches to some of the ethical issues
evoked by easy access to online pornography*. Using the concrete
proposal discussed above – of implementing an “active choice-plus”
policy that would block access to online pornography* (as well as sites
promoting violence) by default, such that customers desiring access to
such materials would have to opt in for such access:
(A) What is your recommendation? That is, do you oppose or support
the implementation of such a proposal?
(B) Whatever your recommendation, what arguments, evidence, and
so forth, can you offer in support of your view?
(C) Given the arguments and evidence you offer, how far do these
follow primarily utilitarian, deontological, and/or virtue ethics lines of
argument?
(D) Perspective-taking: take up one of the ethical decision-making
frameworks you did not use. For example, if you found yourself
arguing primarily along utilitarian lines, shift your perspective to that
of a deontologist and/or virtue ethicist.
As Ruddick’s analysis exemplifies, just because you take up different
ethical frameworks does not mean that you will land with different
ethical conclusions. Rather, she shows how both deontological ethics
and virtue ethics raise important questions about how far
consumption of pornography* is an ethical good. In any case, by
taking up an alternative ethical framework and applying it to the
proposed “active choice-plus” policy, do you find yourself landing with
a conclusion that is the same as or different from your own
conclusions?
(E) Especially if you should find that the alternative framework leads
you to a different conclusion, you can now confront a still more
difficult meta-ethical question: what arguments, evidence, and/or
other kinds of warrant (including, for example, strongly positive
and/or strongly negative experiences and emotions) can you offer in
support of the ethical framework you prefer?
2. As we saw in the shift from the utilitarian approach to deontology
and virtue ethics, a key factor in such shifts is that a given ethical
framework or analysis simply doesn’t help us actually form reliable
judgments and/or choose in the face of a difficult decision.
Especially if you are not satisfied with any of the approaches and
outcomes that we have seen thus far – that is, they fail to capture your
own ethical intuitions and approaches in one or more ways – can you
articulate just what is missing and/or what is lacking here?
Sex with robots, anyone?
Similar ethical questions and considerations are brought home in
literally embodied ways by the emergence of sexbots. To be sure, the
(male) dream of “the perfect woman” as one’s own creation is as old as
Pygmalion, the ancient Greek sculptor who fell in love with a statue of
his own making. Aphrodite kindly made the statue come alive,
presumably making Pygmalion a very happy man (Ess 2017a, 98).
These dreams take on new life in early modernity, beginning with the
E. T. A. Hoffmann Gothic romance Der Sandmann (The Sand Man:
[1816] 1967; cf. Coeckelbergh 2017, 42f.). Perhaps influenced by Mary
Shelley’s slightly later Romantic novel – Frankenstein; or, The Modern
Prometheus ([1818] 1933) – robots entered the Western imagination
as often female, and then as almost always seductive and dangerous.
From “Maria” in Fritz Lang’s iconic Metropolis (1927) through Ava (a
conflation of Adam and Eve) in Ex Machina (2015), the image of what
Mia Consalvo has deftly called “the techno-femme fatale” (2004) is
rooted in especially Western Christian teachings – namely, the same
Augustinian dualisms, and thereby contempt for women, body, and
sexuality (not to mention, nature), that we saw grounding early
conceptions of “cyberspace” as a pure space of minds vs. “meatspace,”
etc. (Ess 2017a, 95–104; cf. Ess 1995).
Importantly, the Japanese reception of robots in general is
comparatively untinged by such dark concerns. This may reflect a very
different Japanese tradition of animism. As readers familiar with
animé and related Japanese traditions will recognize, contra a
Western dualism that believes living minds (and souls) are radically
separate from matter as “dead stuff,” animistic traditions assert that
all “things” about us are alive in some way. And so the dualistic gap
between life and matter (and potential dangers such a gap may pose)
is replaced by a continuum between more material robots and more
fully animate human beings. In both Japan and Western countries and
cultures, nonetheless, sexbots are clearly designed and marketed to be
perfectly compliant with their owners’ wishes. A primary ethical issue
emerges here – not only in their consumption and uses, but in their
very design: insofar as sexbots are overwhelmingly female, they
thereby inscribe and reinforce traditional attitudes of male dominance
and female subordination. It hardly needs saying that such attitudes
remain in full, often brutal, force in countries and cultures throughout
the world – including the Scandinavian countries, however much they
otherwise stand out as the most gender-equal societies in the
industrialized world (Holst 2017, 12–40).
When sexbots were still the stuff of science fiction, UK computer
scientist David Levy inaugurated contemporary ethical debates on sex
and robots with his Love and Sex with Robots: The Evolution of
Human–Robot Relationships (2007). We will see that Levy’s
arguments are very largely utilitarian. Some of the strongest
counterarguments to Levy’s great enthusiasm for sexbots have been
forcefully developed by Kathleen Richardson (2015): Richardson
argues much more from deontological and virtue ethics perspectives,
in hopes of stopping the production of sexbots altogether. Between
these two poles are middle grounds that we will then explore.
Utilitarian approaches. Levy (2007) forwards an extensive range of
primarily utilitarian considerations for what he sees as the sexbots of
a not-too-distant future: such increasingly human-like robots will
ostensibly entail economic benefits (from a growing industry) as well
as “the likely reduction in teenage pregnancy, abortions, sexually
transmitted diseases, and pedophilia” (ibid., 300). And, of course,
there’s simple physical pleasure, the core value for especially
Benthamite utilitarians. A little more carefully: Levy highlights sex as
desirable for the sake of pleasure, the release of tension and stress, the
pursuit of novelty, and the escape from boredom (ibid., 187). At the climax
(pun intended) of the book, Levy enthuses over a great pleasure circle:
as new machine developments evoke new human desires, these will
stoke still further innovation, resulting in “great sex on tap for
everyone, 24/7” (ibid., 310).
Deontological and virtue ethics considerations. Levy does take up one
deontological consideration – namely, recognizing the rights of robots
as they become more independent (ibid., 98, 305, 309; cf. Nørskov
2016). Kathleen Richardson (2015) is one of Levy’s primary critics and
founder of the “Campaign Against Sex Robots”
(https://campaignagainstsexrobots.org/about). Broadly, Richardson
questions whether or not the utilitarian benefits Levy proposes will
actually come about. In addition, though she does not use the term,
one of Richardson’s central critiques fits within virtue ethics: she
emphasizes the importance of empathy, which she defines as “an
ability to recognise, take into account and respond to another person’s
genuine thoughts and feelings” (2015, 291). Part of her objection is
that, by refocusing our desires and sexuality onto sexbots as compliant
objects – i.e., devices that we purchase, turn on and off, sell off or
dispose of – we no longer are required to conjoin love and sex with
empathy. As we have seen (chapter 4: pp. 139–40), Shannon Vallor is
even more explicit about the central importance of empathy as a
virtue, as one of the most basic virtues (along with perseverance and
patience) for human communication, friendship, and intimate
relationships; these in turn seem to be prerequisites for good lives of
flourishing (Vallor 2009, 165f.). Specifically, then, to redirect our
sexuality – and, for Levy, love – to sexbots is thereby the loss of the
opportunity to practice empathy: this sort of “ethical deskilling”
thereby threatens to undermine the basic conditions for human
communication and flourishing (Vallor 2015). While Levy’s expanding
pleasure circle of “great sex on tap for everyone, 24/7” (2007, 310)
may be irresistible to pleasure-centered utilitarians, for virtue
ethicists, it risks becoming a literally vicious circle, i.e., one of
increasing “vice” or anti-virtue: in losing empathy, we only make
ourselves more like the machines we “have sex with.” (For more
extensive discussion of relevant issues and arguments, see Ess 2018b.)
REFLECTION/DISCUSSION/WRITING QUESTIONS
1. Again, one of the difficulties facing utilitarian emphases on
consequences – positive or negative – is just whether or not predicted
consequences, especially further in the future, can be counted on to
come about as promised. In particular, by 2018, “no empirical
evidence” had been found to support any of the positive benefits Levy
promised in 2007 (Davis 2018). Specifically, “rather than protecting
sex workers, the dolls might fuel exploitation of humans” (ibid.) – that
is, supporting one of Richardson’s key objections to Levy’s arguments
and sexbots more generally.
Review Levy’s list – and see if you can think of any additional positive
benefits of sexbots that he might have missed. And then, referring
either to Davis (2018) and/or the academic study she discusses (Cox-
George and Bewley 2018), add the more negative consequences that
might also well accrue. (As a fun wrinkle: how about the possibility of
your sexbot being hacked – and used to kill you? See Cuthbertson
2018.)
Using your best utilitarian calculus skills – what is the result? That is:
sexbots – yes, no, and/or maybe?
Discuss your analysis – including what you think are the one or two
most likely positive and negative consequences.
And: in your calculus, how did you determine how many positive utils
and negative utils to assign to the diverse possible consequences?
2. Virtue ethics approaches: loving your sexbot – while s/he is faking
it?
A. Levy has acknowledged that sexbots may well not have genuine
emotions and desires – but that this doesn’t matter: “if a robot
behaves as though it has feelings, can we reasonably argue that it does
not?” (2007, 11). Well, yes: in fact, the challenges of creating any sort
of real emotion or desire in an AI or robot are so complex that robot
and AI designers have long focused instead on “artificial emotions” –
namely, crafting the capacities of such devices to read our own
emotions and then fake an “emotional” response in turn.
John Sullins (2012) has argued that such devices, however satisfying
they might be on an emotional and physical level, are fundamentally
objectionable on a deontological level: to intentionally deceive humans
in these ways is to violate the respect for human beings owed to us as
autonomous and equal Others (ibid., 408).
On the other hand, it seems very likely that all long-term intimate
relationships involve at least occasional “faking it” – that is,
pretending to respond to the amorous desire of one’s lover with
approximately equal desire. These sexual equivalents of “little white
lies” may likewise be necessary components of human sociality – that
is, well-intentioned deceptions that may help our relationships work
more smoothly, or even flourish more fully in the long run (cf. Myska
2008).
But this raises the question of analogical argument. As you reflect on
the above:
(i) is the analogy between “little white lies” and unwanted sex (at the
extreme, marital rape) a good one? Why, and/or why not?
(ii) what about the analogy between a loving partner occasionally
seeking to please his or her lover by “faking it” – and a sexbot
intrinsically incapable of experiencing or expressing genuine emotion
and desire, and which (who?) thereby is constantly faking it?
Given your analysis of these arguments and analogies – do you agree
more with Levy that artificial emotion is enough, and/or with Sullins
that artificial emotion is an unacceptable deception?
B. Sara Ruddick’s distinction between complete sex and good sex is
also helpful. A sexbot might well be capable of offering us good sex
– a pleasurable experience that might well have additional therapeutic
benefits (though this is empirically contested). On the other hand,
recall that complete sex requires a mutuality of (real) desire between
two autonomous beings: and that this mutuality is conjoined with
deontological norms of equality and respect. On this account, loving,
as entailing mutual desire, equality, and respect, is thereby a virtue,
i.e., a capability that requires practice over time.
(i) Given these conditions, would complete sex with a robot – as
lacking genuine consciousness, desire, and emotion – be possible?
(ii) Presuming your response is “no,” what does this mean for the
larger debates surrounding sexbots – for example:
as being intrinsically deceptive and thereby disrespectful of human
autonomy (Sullins);
as being capable of “good sex” – but not complete sex as Ruddick
accounts for it;
and thereby threatening an “ethical deskilling” – not only of empathy,
as Richardson argues, but of loving itself, both of which would seem to
be foundational to friendship, long-term intimate relationships,
parenting, and other relationships central to good lives of flourishing?
3. Meta-ethical issues
These debates turn largely on utilitarian arguments in favor of sexbots
vis-à-vis deontological and virtue ethics arguments against them, at
least in limited degree (good sex but not complete sex), if not
absolutely (Richardson’s campaign to stop sexbots entirely).
How far, then, is this actually a debate? That is, from a meta-ethical
perspective, the primary opponents – Levy and Richardson – are, in
effect, playing by different ethical rules. In a certain way, this means
that they are talking past one another. To use an analogy, it’s as if one
is arguing from the rules of rugby, while the other is arguing from the
rules of American baseball.
What to do about this meta-ethical problem? Minimally, it requires us
– you – to determine which of these ethical frameworks is more
compelling and foundational.
Some initial questions may help here:
Where do those previous responses lead you with regard to these
debates?
Are you still comfortable with your previous choices?
And/or: do your previous preferences – whether for utilitarianism,
deontology, and/or virtue ethics – lead to positions and conclusions
here that you do not find yourself in agreement with?
In the latter case, what do you do with the apparent inconsistencies at
these meta-ethical levels?
(Hint: you may have to change your mind …)
Now: What about games?
As with the development and diffusion of all sorts of pornographies via
digital media, so computer-based games have likewise dramatically
evolved and developed over the past three decades. The range of
games is staggering: while popular press reports (still) tend to focus –
because of their intense violence – on so-called first-person shooter
(FPS) games, massively multiplayer online role-playing games
(MMORPGs) such as the classic World of Warcraft, and team-based
arena games such as Dota 2 – the world of computer-based games runs the gamut from
dance and exercise games to serious or educational games designed to
achieve specific learning outcomes. The diffusion of games, as with the
diffusion of pornography*, has of course followed the diffusion of
mobile devices. The number of games available just for mobile phones
is sufficiently extensive and constantly changing as to constantly
require new guides – for example, to the top ten this week. … Along
the way, games and, again – in parallel with the pattern for
pornography* – game studies have grown from a relatively small field
in the early 2000s to an increasingly prominent and distributed
interdisciplinary set of academic fields, replete with dedicated
institutes, research centers, and a growing number of relevant journals
(Aarseth 2015). The world of professional gaming has likewise
developed remarkably over the past two decades, now encompassing
multiple leagues, professional teams with respectable salaries (as well
as health benefits and retirement plans) – so much so that
“competitive gaming is starting to look a lot like professional sports”
(Webster 2018). All this attests to the increasing cultural roles and
importance of digital games. (At the same time, it can be noted –
another marker of a post-digital era – that good old-fashioned board
games have also been making a rather remarkable comeback: for
example, in 2016, sales in the US grew by 28 percent (Birkner 2017).)
This importance, perhaps, should not surprise us: game scholars and
researchers often hark back to the work of Johan Huizinga, who
famously named us Homo ludens – “[hu]man the player” ([1938]
1955). But, along the way, there have been casualties – or so, at least,
critics claim. For example, the Columbine (Colorado) killings in 1999
were linked with the killers’ affection for violent video games. A long
list of subsequent school shootings, both in the US and in Europe, were
likewise linked to heavy use of violent games. Similarly, in the event
many Norwegians refer to simply as “22 July,” Anders Behring Breivik
killed 77 people and wounded 242 others, including 69 young people
simply shot down on the island of Utøya. Breivik acknowledged
playing games such as Modern Warfare 2 and World of Warcraft, in
part as “training” (e.g., Daily Mail Reporter 2012). By the same token,
James Holmes, dressed as the Joker, killed 12 people and injured over
60 others during the premiere of a new Batman movie in Colorado:
media reports were quick to allege the role of video games – even if in
a somewhat qualified fashion, as the quotation from Pat Brown at the
beginning of the chapter exemplifies (CNN 2012). Not surprisingly,
these claimed linkages are hotly contested – and not without reason –
by those who want to defend such games against media tendencies to
scapegoat both games and gamers. The issues we’ve examined above –
specifically, how to determine causal linkages between consumption
and use of such materials, and just what harms and/or liberations they
may foster (if any) – thus emerge here as well. Nonetheless, more
recent studies, including two “meta-studies” – studies that analyze a
collection of specific studies – seem to more solidly demonstrate at
least some causal effect – if relatively small (Moyer 2018; cf. Gentile et
al. 2014). Utilitarians in particular will need to pay close attention to
these studies – and the size and degree of effects – for their calculus.
At the same time, just as with the transformations of pornography*,
the range of ethical issues affiliated with computer-based games and
the sophistication with which those issues are taken up have likewise
developed in remarkable ways. A foundational contribution here is the
work of Miguel Sicart, whose 2009 volume The Ethics of Computer
Games develops an extensive and careful analysis of the game-player
as an ethical subject. Sicart draws on phenomenology (including the
work of Barbara Becker and her notion of the body-subject
[LeibSubjekt], discussed above, p. 187) and virtue ethics to argue that
game-playing requires game-players to “reflect critically on what we
do in a game world during a game experience, and it is this capacity
that can turn the ethical concerns traditionally raised by computer
games into interesting, meaningful tools for creative expression, a new
means for cultural richness” (2009, 63). Contrary, then, to the
common critiques of violent video games, Sicart sees in them critical
sites for the development of ethical judgment – Aristotle’s primary
virtue of phronēsis – since “players present moral reasoning, a
capacity for applying ethical thinking to their actions within a game,
not only to take the most appropriate action within the game in order
to preserve the game experience, but also to reflect on what kind of
actions and choices she is presented with, and how her player-subject
relates to them” (ibid., 101).
To be sure, there are multiple national and international efforts to
control and regulate games – somehow. The Entertainment Software
Rating Board (ESRB) in the US, for example, “assigns the age and
content ratings for video games and mobile apps, enforces advertising
and marketing guidelines for the video game industry, and helps
companies implement responsible online privacy practices”
(www.esrb.org/index-js.jsp). As with pornography*, video games are
vociferously defended on US First Amendment grounds – that is, as
invoking rights to free speech (e.g., Liptak 2011). The US hence
emphasizes such “self-regulating” approaches. The European
counterpart, PEGI (Pan European Game Information), has developed
an age-based rating system that is further refined by specific “content
descriptors”: violence, bad language, fear, gambling, sex, drugs,
discrimination – and, most recently, “in-game purchases,” that is, a
warning that a game includes the possibility of spending money within
it (https://pegi.info/news/new-in-game-purchases-descriptor). Such
“self-regulation” or “co-regulation” approaches thereby minimize
governmental oversight in the (neoliberal) name of maximizing
consumer choice and industry engagement (Livingstone 2011b: 511ff.).
Other countries and cultures take a stricter approach. For example,
South Korea recognizes “game addiction” as a psychological problem,
unlike, for example, the American Psychological Association (Hsu,
Ming-Hui, and Muh-Cherng 2009).
addiction seriously: one prominent psychology professor notes that
the country has the strictest games regulation in the world as part of
its approach to “youth media protection” (Jugendmedienschutz)
(Lukesch 2012). Japan, as a last example, is both famous for the
diverse aesthetics brought to game design and (in)famous for games
such as RapeLay that focus on sexual violence against women.
Violence, however, seems difficult to avoid. Indeed, it is arguably
baked not only into a wide assortment of games but also into the
surrounding industries, cultures, and technologies. In 2012, for
example, Mia Consalvo called out what she identified as the “toxic
gamer culture” – a hostility toward women gamers both offline (at
conferences) and online, including harassment (and worse). All of this
exploded into more public consciousness with the “#Gamergate”
controversies of 2014, characterized as “a campaign of systematic
harassment of female and minority game developers, journalists, and
critics and their allies” (Massanari 2017, 330; see Dewey 2014).
Massanari’s analysis is especially interesting, as it draws on a long-
term ethnographic study of how various aspects of Reddit.com’s
design and structure reflect, reinforce, and help amplify “anti-feminist
and misogynistic activism” (2017, 329).
These broader considerations are analogous to the larger backgrounds
and relationships we explored with regard to the Fairphone. A key
ethical question here is whether we take a more individualistic and/or
a more relational approach to our ethical lives. As with the Fairphone,
a more individualistic approach might well reduce the relevance and
weight of these more global contexts; a more relational approach will
at least raise questions about our consumption of what some will
otherwise argue is “just a game.”
As with pornography*, out of all of this complexity we will take up only
a small set of the central issues – specifically those that parallel the
debates we have explored surrounding pornography* and sexbots.
INITIAL REFLECTION/DISCUSSION/WRITING QUESTIONS: DON’T GET
VIOLENT?
1. If you are a game-player, describe the game(s) you are familiar with
and play most frequently. If you are not a gamer, describe one or more
games you’ve watched others play regularly. Either way, what sorts of
habits or excellences are required in order to play these games
successfully? That is, what sorts of skills and abilities do they require
and foster?
2. Given the habits, skills, etc., that you identify above, can you use one
or more of the ethical frameworks we have explored to develop
arguments for the playing of such games? For example, you might
argue from a utilitarian framework that playing the game leads to a
clear set of benefits (e.g., relaxation, harmless pleasure, improvement
of certain skills, etc.) at a modest-to-negligible cost (e.g., the cost of
the game and required equipment, one’s time, etc.). Similarly, can you
use one or more of the ethical frameworks we have explored to develop
arguments against the playing of such games?
3. Once you’ve established – individually and/or as a group or class –
a set of arguments pro and con, how do you respond to the debate
here? That is, can you develop additional arguments, evidence,
reasons, etc., that would incline the debate toward one side or
another?
4. In the face of these diverse responses and perspectives on the
ethical dimensions of computer games, how do you respond?
In particular, do you respond to these contrasting claims and
perspectives as:
an ethical relativist
an ethical absolutist
and/or an ethical pluralist?
Explain and, more importantly, justify your response. That is, what
additional reasons, evidence, grounds, etc., can you give in support of
your meta-theoretical response to the first-level debates regarding
computer games?
Sex and violence in games
As in the discussions concerning pornography*, there is ongoing
debate as to whether or not what one does in a game – e.g., including
violent and/or ethically questionable sexual acts – has any effect on
one’s real-world attitudes and actions. For every new study that claims
to show some sort of causal linkage between game-play and players’
real-world acts and attitudes, there are vociferous attacks by defenders
of games and gaming – justified at least in part, as we saw in the case
of pornography*, because of the extensive difficulties of demonstrating
such causal linkages (Nash et al. 2015).
These and related defenses of what otherwise seem to be excessive
violence and violent sex in games can be captured in the phrase “it’s
only a game.” Such defenses argue, that is, that there are clear and
more or less impermeable boundaries between what happens in an
online and/or virtual game environment and what gamers do in the
rest of their largely quite ordinary lives.
Moreover, the debates we explored above regarding how far some
forms of pornography* may serve emancipatory and/or patriarchal
ends get replayed in the game context as well. In particular, as
suggested by the example of the Japanese game RapeLay, rape in
computer-based games is apparently as old as the games themselves,
beginning with the venerable Dungeons and Dragons role-playing
games first instantiated on computers in the 1970s and especially
popular in the MUDs and MOOs of the 1980s and 1990s. Julian
Dibbell’s famous “A Rape in Cyberspace” (1993) documented such
sexual violence – and further made clear that the presumed boundary
between the real and the virtual was not as solid or impermeable as
some wanted to think. Rather, while the sexual assaults targeted
against two of the avatars in LambdaMOO played out as bare textual
descriptions unfolding across the screens of the avatars’ real-world
owners (along with those of other members of the community looking
on), the sense of violation experienced by the real persons behind the
assaulted avatars was strong enough to evoke real tears. This, as
Dibbell points out, is the flip side of more consensual forms of virtual
sex: contrary to initial intuitions, he explains, virtual sex, despite its
restrictions to 900 lines of text, can be as intense as any real-world
encounters – perhaps even more so, “given the combined power of
anonymity and textual suggestiveness to unshackle deep-seated
fantasies” (Dibbell [1993] 2012, 30).
Imagine how much more powerful such experiences might be in more
contemporary, audiovisually enriched virtual worlds. For Clarisse
Thorn, in fact, one of the great advantages of contemporary games is
precisely that they can allow women – including feminists such as
herself – to explore their fantasies and alternative tastes. Specifically,
Thorn points to some evidence that around one-third of women report
rape fantasies, and so she argues that games and virtual worlds are
valuable places for feminists interested in BDSM (Bondage-Discipline-
Sadism-Masochism) (Thorn and Dibbell 2012).
On the other hand, Maria Bäcke’s interviews with “submissives” –
women who role-play as slaves to men as masters in the Second Life
community of Gor – suggest that such explorations may affect the
women in undesired ways back in the real world (Bäcke 2011). Lastly,
in her review of the game RapeLay, Leigh Alexander states simply:
“RapeLay relies on the horrendous, wildly sexist fantasy that rape
victims enjoy being attacked” (2009).
SECOND REFLECTION/DISCUSSION/WRITING QUESTIONS: IT’S ONLY A GAME?
Let’s begin by presuming that there is a clear line (at least for most
players) between our gaming experiences and our ordinary, day-to-day
lives.
1. Are there any ethical considerations that you can offer in either
support or critique of the experiences of violence and (violent) sex in
games such as Grand Theft Auto V, or its contemporary counterparts
as these may be familiar to you? Try to be clear here how far your
considerations draw on consequentialist-utilitarian, deontological,
and/or virtue ethics perspectives.
2. Given some of the differences we’ve seen in cultural and national
backgrounds as affecting prevailing attitudes toward sexuality and,
now, the possible dangers as well as benefits of computer-based
games, are any of your responses above in keeping and/or in tension
with your own national/cultural background?
Especially if your responses are different from what we might expect
or anticipate for someone with your specific national/cultural
background, can you offer any reflection, set of experiences, etc., that
seem to you to have played an important role in shaping your views as
different from those surrounding you?
3. We’ve now seen – in the domain of both pornography* and
computer-based games – a central debate (primarily) within feminist
circles regarding whether exploration of diverse sexualities and sexual
tastes and preferences (including BDSM) serves
to help emancipate especially women from gender roles and
prescribed notions of sexuality that subordinate them to the power
and preferences of men – for example, as such materials and
experiences help women explore and determine for themselves
their sexual identities and preferences;
and/or
simply to reinforce their subordination and inequality – for
example, by endorsing claims that women enjoy rape as sexist
fantasies that portray them as not simply “just meat,” but as
enjoying such a status.
Again, both arguments agree on a central ethical norm – the especially
deontological emphasis on (near-)absolute respect for the autonomy of
persons. The debate is, in part at least, how far the sorts of narratives
found in alt porn or games such as RapeLay serve the autonomy,
especially, of women.
(A) Do you have (a) strong thought(s)/feeling(s)/intuition(s)
regarding this debate – that is, if forced to choose, about which side
you might take? If so, can you offer specific reasons, evidence
(including your own experiences, both positive and negative, if you’re
comfortable doing so), and/or other warrants that might support your
views on this debate?
(B) Given your views, do they support some sorts of restrictions on
such materials – for example, filtering software intended to prevent
children from accessing alt porn (and pornography* sites more
generally), national legislation and enforcement systems that would
rate games as appropriate to specific age groups – or no restrictions
whatsoever on such materials? Explain and justify your response as
best you can.
(C) Is there consensus or considerable diversity of opinion and
viewpoint on these matters in your class? Especially if there is
considerable diversity, can you and your class, perhaps with help from
your instructor(s), see any way(s) of moving forward toward resolving
these differences?
Recall that we’ve seen three sorts of meta-ethical responses to
profound ethical differences: ethical relativism, ethical
monism/dogmatism, and ethical pluralism. Are any of the differences
articulated in this exercise resolvable via some version of ethical
pluralism – if so, what would it look like? If not, then are you
comfortable with the remaining choices:
either a relativism, which would likely threaten the basic
deontological claim that human autonomy requires (near-)absolute
respect as a primary ethical value – i.e., one that is (more or less)
universal, not relative to a given culture or time;
or a monism/dogmatism, which would insist that only one view
can be correct, and any diverging views must be wrong?
4. What if it turns out that the presumed boundaries between virtual
game worlds and our everyday lives are not clear and solid? What
happens if – as especially virtue ethics approaches argue – what we do
in such game worlds does interact with our everyday lives, insofar as
we learn and practice in those worlds (as Sicart emphasizes, for
example) specific habits and, perhaps, attitudes?
Presume now – if only from a virtue ethics perspective – that there are
indeed crossovers between our online and/or computer-based gaming
experiences and our offline lives, practices, habits, and attitudes.
Given this presumption:
(A) Does it change any of your responses to the questions raised above
in 1–3? If so, which ones – and how?
(B) In particular, especially given this presumption, where do you
“draw the line” regarding which materials (whether specific forms of
pornographies* and/or specific sorts of games) are generally ethically
commendable (or at least ethically neutral) and which are potentially
harmful, to their consumers and players and/or to those around them?
For example, Sicart points out that:
For Aristotle, ethics and virtue are not something we have, but rather a
practice – one in which we can improve. Our goal as beings trying to
flourish as moral beings is to first cultivate the virtues and then
develop the practical wisdom that will allow us to make virtuous
choices in different situations. Similarly, playing games is a matter of
maturing our capacities to create the player-subject and its moral
reasoning.
(2009, 103)
From this perspective, a game such as Custer’s Revenge – which, if the
player succeeds in meeting its challenges, allows him to rape a tied-up
Native American woman – is to be ethically rejected. We learn nothing
in playing the game, that is, that helps us flourish as moral beings –
specifically, by way of cultivating specific habits and virtues, including
the better practice of phronēsis or practical wisdom. Similar
arguments would seem to hold for games such as RapeLay.
Recall that this does not mean for Sicart that all games involving
violence, including rape, are necessarily beyond the pale: rather, we
have seen him defend violent games such as Grand Theft Auto and
Super Columbine Massacre RPG!, as such games can foster the
practice of phronēsis or practical wisdom.
Using these examples as a starting point:
(i) Develop with your cohorts a continuum of games familiar to you.
(At the time of writing in the English-speaking world, a list of popular
games includes: Dota 2, League of Legends, Fortnite, PlayerUnknown’s
Battlegrounds (PUBG), Red Dead Redemption 2, God of War,
Overwatch, Counter-Strike: Global Offensive, Fallout, Assassin’s
Creed, Apex Legends, and Hearthstone.)6 At the same time, consider
games such as RapeLay and Custer’s Revenge and/or their more
recent counterparts.
(ii) Using Sicart’s approach, can you identify which games indeed
seem to foster the development of important habits and virtues,
including the primary virtue of practical wisdom or phronēsis, and
which don’t? Insofar as you can do so, you would then have a way of
“drawing the line” between games that could be defended on ethical
grounds – even if they include striking levels of violence and violent
sex – and those that are on the other side of the line.
(iii) Given the line(s) that you and your cohorts draw, are you
comfortable and persuaded that this/these would be useful as (a)
way(s) of offering ethically informed advice to friends and family,
including younger folk, as to what games would be worth their while –
and which might not? Especially if you think additional considerations
need to come into play in offering such advice, articulate these as best
you can.
(iv) Insofar as you have managed to develop what appears to be an
ethically defensible set of lines regarding commendable and non-
commendable games, are you comfortable and persuaded that these
would further be useful as ways of developing legal guidelines for, say,
age-appropriate ratings of games and/or other forms of legislation and
regulation (including voluntary codes) on a national level (meaning,
first of all, your country and culture of origin) and/or at an
international level?
5. Where do we draw the line … as ethical consumers?
As we saw in the example of the Fairphone (chapter 4, pp. 153–6), a
further set of ethical considerations is posed by the larger relations
within which the production, distribution, and disposal of such
products take place. The Fairphone, and Fairtrade products more
generally, respond to increasing consumer awareness of these larger
infrastructures and a growing sense of responsibility for pushing them
in more fair and just directions through one’s purchases. We have also
explored Luciano Floridi’s account of distributed responsibility as an
ethical reality in a world in which we are all inextricably interwoven –
including with the workers (perhaps child slaves) and literally bloody
contexts which source critical components of our beloved devices. As
the earlier exercise suggested, part of our response here turns on our
conceptions of who we are as human beings – broadly, as more
individual, more relational, and/or somewhere in between as
relational autonomies.
And this last, of course, turns in part on the cultural contexts of our
origins and experiences.
To explore these dimensions of consuming and enjoying games
containing graphic sex and violence (often conjoined), consider the
following:
A. Reviewing the phenomenon of #Gamergate, especially as elaborated
in further sources – is there a good analogy between, say, an
electronics industry that relies in some measure on conflict minerals,
child slavery, etc., and a game industry that seems marked, in some
quarters at least, by a “toxic masculinity” that occasionally leaks out
into real-world threats and harm against targeted women gamers,
designers, and journalists?
B. Presuming there is – to some degree – a good analogy here, then
what does that imply for your ethical choices regarding the
consumption and enjoyment of games?
Perhaps nothing … perhaps a lot … Either way, carefully explain your
thoughts and responses here.
C. At least part of your response will turn on your holding more
individualistic vis-à-vis more relational senses of selfhood (and/or the
middle ground of relational autonomy) – and thereby more individual
senses vis-à-vis more distributed senses of ethical responsibility.
Where do you seem to be on this scale – and does it indeed make a
difference in your choices here?
D. As we saw in the example of the Fairphone and the “Red” products
in Floridi’s examples, even if one holds to a more relational sense of
self and thereby a more distributed sense of responsibility, purchasing
a Fairphone or other Fairtrade product may not be fully mandatory,
but “supererogatory,” or a “Good Samaritan” choice. Presuming there
are games that, analogous to the Fairphone, are at least less ethically
questionable in terms of both content and the larger production and
consumption relationships that make them available to you, would you
want to argue that it would be ethically commendable, but not
necessarily always obligatory, to purchase and play these?
SUGGESTED RESOURCES FOR FURTHER RESEARCH/REFLECTION/WRITING
The Games Research Network listserv (https://listserv.uta.fi) is a
primary resource in the community of researchers for posting new
publications, conferences, etc. This will serve at least as a starting
point for research projects.
Christopher A. Paul (2018) The Toxic Meritocracy of Video Games:
Why Gaming Culture Is the Worst. Minneapolis: University of
Minnesota Press.
Paul argues that much of what is ethically (and socially) objectionable
about gaming culture can be overcome by following the examples of
more established professional sports.
Mari Mikkola (ed.) (2017) Beyond Speech: Pornography and
Analytic Feminist Philosophy. Oxford: Oxford University Press.
Chapters will take you further into the finer details of, and
contemporary views on, feminist approaches to pornography – if from
within the somewhat circumscribed domains of analytic philosophy.
Paul G. Nixon and Isabel K. Düsterhöft (eds.) (2017) Sex in the Digital
Age (London: Routledge).
This takes up a wide array of topics, several of which intersect with the
considerations in this chapter regarding pornography and sexuality,
violence, and robots (as an extension of the long tradition of sex toys).
John Danaher and Neil McArthur (eds.) (2017) Robot Sex: Social and
Ethical Implications (London: MIT Press). An excellent collection of
contributions that offer more fine-grained analyses of diverse
dimensions of robot sex.
John Sullins (2017) “Robots, Sex, and Love,” pp. 217–43 in Anthony
Beavers (ed.), Philosophy, Macmillan Interdisciplinary Handbooks.
Farmington Hills, MI: Macmillan Reference.
Sullins, a long-time explorer of these domains, provides a highly
accessible overview of both contemporary sexbot technologies and
companies, and an extensive exploration of the many philosophical
perspectives and arguments. The chapter is especially valuable for its
use of the Platonic understanding of eros – a far richer, but also far
more demanding, conception than “just sex” – as a way of analyzing
the physical and ethical benefits and limits of robot sex.
Notes
1 Borrowing from Grodzinsky et al. (2008), and keeping in mind
Susanna Paasonen’s admonition (2011) above, I use
“pornography*” – i.e., with an appended asterisk – to signal that
this term is intrinsically ambiguous and open to a wide range of
interpretations. The intention is thereby to remind us that we
always need to specify more precisely what we mean when speaking
of pornography*, rather than uncritically assuming that the term is
obvious and unambiguous.
2 Again: the whole complex of our lives as meaning-making and
relational beings, thoroughly informed by our co-evolving
technologies (Verbeek 2017; cf. Coeckelbergh 2017).
3 I am grateful to Elisabeth Staksrud for making this data from the
EU Kids Online 2018 survey available in preliminary form.
4 See the further discussion of contemporary feminism in chapter 6,
p. 257, note 4.
5 The doctrine of Original Sin is historically associated with
patriarchal control of women: as the doctrine lays the responsibility
for the introduction of sin and death into the world upon Eve, it
thereby works to demonize women, the body, and sexuality. This
interpretation of the second Genesis creation story (Genesis 2.4–
3.24), while orthodox in Western Roman Catholicism and
subsequently among some Protestant reformers, is directly contrary
to earlier Christian and Jewish readings of the text, which
emphasize instead the positive nature of Eve’s choice: acquiring
“the knowledge of good and evil” is specifically understood as the
attainment of the distinctively human capacities of moral
understanding and free choice – capacities that, in turn, early
Enlightenment thinkers such as John Locke see as foundational to
arguments for democratic polity – i.e., the political arrangements of
human beings capable of rational self-rule (Ess 1995).
6 My very great thanks to Mia Consalvo, Rikke Toft Nørgård, and
Joshua Ess for these suggestions.
CHAPTER SIX
Digital Media Ethics: Overview, Frameworks,
Resources
Morally as well as physically, there is only one world, and we all have
to live in it.
(Midgley [1981] 1996, 119)
Chapter overview
This chapter provides especially those new to ethics with an overview
of the most commonly used theoretical frameworks for ethical analysis
and decision-making. We begin with (1) utilitarianism and (2)
deontology. We then explore (3) important meta-theoretical
frameworks of ethical relativism, ethical absolutism (monism), and
ethical pluralism: these frameworks shape three critically different
ways of interpreting what ethical differences may mean – beginning
with cross-cultural differences in ethical norms and practices – and
thereby how we can respond to these differences. We then turn to (4)
feminist ethics and ethics of care, (5) virtue ethics, (6) Confucian
ethics, and (7) African perspectives.
These theoretical and meta-theoretical frameworks constitute our
“ethical toolkit” – a collection of important but diverse ways of
analyzing and attempting to resolve ethical problems. Part of our work
as ethicists is learning how to apply a given theoretical framework to a
specific issue; and given the diversity of possible theoretical
frameworks, we must also determine which frameworks are best
suited for confronting and resolving specific ethical issues. The meta-
theoretical frameworks of relativism, absolutism, and pluralism help
clarify and guide these determinations.
A synopsis of digital media ethics
Much of the ethical reflection on digital media – most especially, on
the ethical dimensions of information and communication
technologies (ICTs) – arose alongside the technologies themselves. But
this means that, until the last two decades or so, most of the discussion
and reflection on digital media ethics took place primarily within
Western countries, utilizing primarily Western ethical traditions and
ways of thinking. To begin with, there is widespread agreement
(Bynum 2000; Stahl, Timmermans, and Mittelstadt 2016, 3) that
Norbert Wiener’s The Human Use of Human Beings: Cybernetics and
Society (1950) stands as the first book in computer ethics. For over
two decades, “computer ethics” was the concern of a very small group
of professionals – principally computer scientists and a few
philosophers. “Computer ethics” as its own term emerged only in the
1970s, mainly through the work of Walter Maner, but also manifest,
for example, in the first professional code of computer ethics of the
Association for Computing Machinery in 1973 (and subsequently
revised – most recently in ACM [2018]). The introduction of the
personal computer (PC) in 1982, however, began a dramatic expansion
of the role of computers and computer networks into the lives of “the
rest of us” – i.e., those of us who are not computer scientists or other
sorts of information professionals, such as librarians (see Buchanan
and Henderson 2008). Following the emergence of the internet and
World Wide Web in the lives and awareness of most people in the
developed world in the early 1990s, a number of savvy observers began
to predict (rightly) that by the beginning of the twenty-first century,
information and computing ethics (ICE) would become a global ethics
– i.e., a domain of ethical issues, debate, and possible resolution, of
concern to more and more people representing an increasingly global
diversity of cultural norms and ethical and religious traditions (see
Paterson 2007, 153). In fact, what is called “intercultural computing
ethics” has been underway in ICE since the 1990s (Capurro 2005,
2008; Ess 2005; see Bielby 2015 for an overview).
Along the way, an important meta-ethical debate has emerged – and
frequently arises again among those new to these now long histories.
Briefly, will ICE, especially as it becomes globalized, require: (a)
largely a continuation of traditional ethics, but now applied to new
problems; or (b) a radical transformation of ethical thinking, as
constantly evolving ICTs introduce us in turn to radically new ethical
difficulties (see Bynum 2000; Tavani 2013, 9–12)? As is often the case,
the eventual responses to such either/or possibilities rather constitute
a “both/and”: that is, both (a) and (b) are correct. On the one hand,
there may well be specific instances that point toward the need for
distinctively new approaches (Braidotti 2006). On the other hand,
there are very many examples of how “everything old is new again”
(Ess and Hård af Segerstad 2019). That is, despite often dramatic
technological transformations, coupled with our ever-evolving and
sometimes striking new practices, the familiar ethical frameworks and
approaches continue to work quite well in many instances. For
example, deontology and virtue ethics are central ethical pillars in
European Union philosophy and policy developing around the
emergence of AI and the Internet of Things (Burgess et al. 2018;
Floridi et al. 2018). Virtue ethics and deontology, along with
utilitarianism, are likewise core frameworks in the development of
“ethically aligned design” by the IEEE (Institute of Electrical and
Electronics Engineers), the largest professional and standards-setting
organization in the world (https://ethicsinaction.ieee.org).
For us, the point is to be aware of this larger meta-ethical question and
debate as we go along. Our reflections and responses to this question
will affect (and be affected by) our ethical reflections regarding other
digital media – including our basic conceptions of selfhood, as ranging
from more individual through relational autonomies to largely
relational, as these in turn interact with our background cultures.
Basic ethical frameworks
As we have seen in the opening chapter, “doing ethics” involves much
more than a kind of “rule-book” approach – i.e., picking a set of
principles, values, etc., and applying these in a largely deductive,
algorithmic manner to a problem at hand. Rather, our central ethical
difficulties are difficult largely because they require us first to
determine which principles, values, frameworks, etc., best apply to a
given problem – a determination that Aristotle attributed to phronēsis
or reflective judgment. Developing such judgment requires our
ongoing effort to analyze and reflect on both familiar and new
experiences and problems. The good news is that our ethical
judgments – at least, if we consciously seek to develop them in these
ways – generally do get better over time. The daunting news is that
developing such judgment is a lifetime’s work, one that is never
complete or final.
In point of fact, as an acculturated member of a culture and society,
you already have a reasonably well-developed body of experience and
practice with ethical analysis and judgments. The following will simply
enhance the ethical toolkit you already have developed, by articulating
some of the most central frameworks for ethical reflection, both
Western and then non-Western ones.
REFLECTION/DISCUSSION/WRITING EXERCISE: A STUDENT DILEMMA
It’s Wednesday evening, and you’re packing up some books and notes
to take over to a friend’s apartment. You have different majors, but
you are both in the same section of a required course – and tomorrow
is one of two exams given during the semester; your grade on the exam
will count for 40 percent of your final grade in the course.
For you, the course is not so hard, but your friend is really struggling.
You’ve promised to help her study this evening; you both need to get a
good grade on the exam and in the course to keep your grade point
average at the level required for your scholarships.
Just as you’re walking out the door to go to your friend’s apartment, a
good friend calls you up and says that he and some of your buddies are
at the local pizza place, having dinner and some beers. They’d really
like you to come on over, in part because you owe them a round or two
of drinks from the last time you got together. What do you do?
1. Utilitarianism
Most students in my experience approach this sort of problem in a
consequentialist – perhaps even a utilitarian – way. That is, they will
begin to figure out the costs and benefits of (1) turning down their
buddies for pizza and beer, vs. the costs and benefits of (2) fulfilling
the promise to help a friend study. One of the chief advantages of this
approach is that we can set up a handy table to help us keep track of
the positives and negatives. An initial analysis of our choices might
look like the table below.
But, of course, there are additional positive and negative consequences
of our choices that may seem relevant to our decision: e.g., if I help my
friend, she will do better on her exam (and, most likely, so will I); if I
go to have pizza and beer, I will certainly have a good time this evening
but probably not do so well tomorrow in the exam. If we think further
down the road, it may be that doing well in this exam will turn out to
be a “make-or-break” event with regard to our success in the course:
that is, should we both do well, we might subsequently end up with a
better grade in the course; but, if we don’t, then we might end up with
a lower grade than we need in order to maintain our grade point
averages for our scholarships, etc. The possible consequences even
further down the road might be enormous – ranging from doing well
in school more generally, moving on to a good job, etc., to (worst-case
scenario) losing needed scholarships, thereby being unable to
complete school, thereby failing to be able to find a good and satisfying
job, etc.
You get the point. For the consequentialist, the game of ethics is about
trying to think through possible good and bad consequences of
possible acts, and then weighing them against one another to
determine which act will generate the more positive outcome(s).
Consequentialist analysis

Possible action (1): Fulfill promise – study with friend
Costs (negatives): will miss a nice evening with friends …
Benefits (positives): will be able to help a friend in an important way …

Possible action (2): Break promise – enjoy pizza and beer
Costs (negatives): will disappoint a friend who’s counting on your help …
Benefits (positives): will enjoy a nice evening with friends …
Strengths and limits
Consequentialism is certainly a tried-and-true approach to ethics: it’s
at least as old as Crito’s efforts in the dialogue named after him to
persuade Socrates to break out of jail and thereby avoid execution by
the Athenians. And especially in its utilitarian form – i.e., as developed
in the modern era by Jeremy Bentham and further elaborated by John
Stuart Mill, both of whom argued that we must pursue those acts that
bring about the greatest positive consequences (pleasure) for the
greatest number – the consequentialist approach has come to
dominate ethical decision-making, especially in the United States
and the United Kingdom (e.g., Stahl
2004). Certainly, there are many cases in which consequentialism will
do what we want an ethical theory to do – i.e., to help us determine
which is the better choice of two (or more) possible actions.
But, as this example also suggests, consequentialist approaches face
serious limitations. (We will also see this to be true of every other
theory we examine: after we have reviewed all the theories under
discussion here, one of our questions will be to see whether we can
discern which theory – or, perhaps, which combination of theories –
seems more sound, useful, justifiable, etc., than its competitors.) In
my view, there are three important such limitations.
(a) How do we numerically evaluate the possible consequences of
our acts?
In simple cases, this is not a problem. Either I go to get a new bus pass
or I face walking to school on a cold winter day. Either I pay my phone
bill or I find myself out of touch with friends and family, along with
the loss of internet access more generally.
But the hard cases are hard in part because it’s not always clear how we are
to weigh the possible outcomes of one act against another.
Bentham famously thought that all possible consequences, as some
form of pleasure or pain, could be evaluated in terms of their intensity
and duration – for example, as part of a “hedonic calculus” (Sinnott-
Armstrong 2015). Several nineteenth-century economists attempted to
develop this calculus into a strictly quantitative one by introducing the
notion of a “util” as a unit for measuring pleasure or pain (e.g., Sigot
2002). Ethical decision-making would then be a strictly arithmetic
matter of adding up positive and negative utils.
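To see what such a strictly arithmetic procedure would amount to, consider the following small sketch in Python. It is purely illustrative: the util values assigned to each outcome are invented for this example – and, as the following paragraphs argue, it is precisely the defensible assignment of such numbers that has never been established.

# A purely illustrative sketch of a "util" calculus for the
# opening study-vs.-pizza dilemma. The util values are invented
# for the example; nothing in utilitarian theory tells us how
# to assign them defensibly.

def net_utils(outcomes):
    """Sum the signed util values of an action's outcomes."""
    return sum(outcomes.values())

fulfill_promise = {
    "help a friend in an important way": +8,
    "miss a nice evening with friends": -4,
    "better exam results for both of us": +6,
}

break_promise = {
    "enjoy a nice evening with friends": +5,
    "disappoint a friend counting on your help": -7,
    "do worse on tomorrow's exam": -6,
}

print(net_utils(fulfill_promise))  # +10
print(net_utils(break_promise))    # -8

On these (arbitrary) numbers, keeping the promise “wins” by a wide margin – but change the numbers, and the verdict changes with them. The calculus is only as sound as the assignments it is fed.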
But what if not everything can be measured solely in terms of pleasure
or pain? What number of utils do we assign to an evening with friends,
enhanced by the pleasures of food and drink? What number of utils do
we assign to breaking a promise to a friend, coupled with the
knowledge that our breaking that promise may lead to further,
perhaps very serious, consequences (= negative utils) for our friend?
Despite centuries of effort, however, it is very challenging indeed to
establish in practice a relatively standard or quasi-objective scale of
pleasure and pain – physical and/or psychological – that we can thus
neatly quantify in terms of utils for such a hedonic calculus. But
everything in consequentialism turns on assigning relative weights to
given consequences: without some sort of agreed-upon scale or table
of utils to draw on, consequentialism is paralyzed at the outset.
Moreover, as we will see shortly, deontologists argue that some aspects
of human existence cannot be assigned quantitative values: some
things, some of us believe, are beyond measure. And, for such
elements, both consequentialist approaches in general and
utilitarianism in particular (again) have no ethical legs to stand on:
without a universal and consistent schema of positive and negative
utils with which to make our calculations, the arithmetic at the heart
of consequentialism cannot proceed. Indeed, in this case, for the
deontologist, a promise is a promise; it thereby entails a (near-
)absolute obligation. Breaking a promise, however much pleasure the
promise-breaker might get as a result of doing so (starting with
opening the door to pizza and beer), is still wrong.
(b) How far into the future must we consider?
Ethicists distinguish between short-term and long-term
consequentialists. In this example, a (really) short-term
consequentialist would consider only the consequences of his or her
acts over the next few hours. For most of us – at least, if we’re not
allergic to gluten and if our religion or physiology does not forbid
alcohol – pizza and beer with friends would generate more positive
utils than studying for an exam (presuming, that is, that you really do
not like the subject, etc.). By contrast, extending our timeframe by 24
hours might radically change our decision: whatever the positive utils
of pizza and beer, they might well not outweigh the negative utils of
letting down a friend and then watching as both of us do poorly in an
important exam.
And so on. It’s not inconceivable that, in 20 or 30 years, you and your
friend might look back on this exam as a key moment in your lives –
one that led (in the best of circumstances) to further academic and
thereby vocational success, or (perish the thought) to academic failure
and a lifetime of mediocre and unsatisfying jobs. The difficulty is:
consequentialists and utilitarians do not appear to have a satisfying
justification for telling us where in time to draw the line – the point
after which we no longer need worry about the outcomes of our
choices. But where we draw this line can make all the difference in
our calculations.
As this last point suggests, there’s a second difficulty wrapped into the
problem of how far into the future we must attempt to consider: the
further into the future we seek to predict, the less reliable our
predictions can be. And yet, some of those future consequences may be
some of the most important for us in our lives. Worst case: the chances
of realizing what may potentially be the most decisive consequences of
our acts become increasingly (perhaps vanishingly) small the further
into the future we seek to predict those consequences. (In my
experience, much of the anguish we face in ethical decisions turns on
our effort to approach them in a consequentialist fashion – only to
realize that we cannot be very certain at all about some of the most
important possible outcomes of our actions.)
(c) For whom are the consequences we must consider?
The pizza and beer example takes into account only a small number of
people. Bentham and Mill, by contrast, argued that consequentialism
would work for whole societies. Up to a point, at least, this is plausible.
In wartime, for example, generals and political leaders think in clearly
consequentialist terms. Choosing to drop the atomic bombs on
Hiroshima and Nagasaki, for instance, was a relatively easy decision
for the Allied commanders. Dropping these bombs immediately cost
something like 200,000 Japanese deaths – but, as hoped, it put an
end to the war. A conventional land invasion was estimated to result in
around 500,000 Allied soldiers’ deaths (and the deaths of at least as
many Japanese soldiers). At a simple assignment of one positive util
per life:
to use atomic weapons: +500,000 / −200,000 = +300,000 utils
not to use atomic weapons: +200,000 / −500,000 = −300,000 utils
But what about the impact of using these weapons on those who
continued to live (and die) in areas contaminated by radioactive
fallout? What about the impact of using these weapons on the larger
ecosystem? On future generations?1
Attempting to take these possible consequences into account clearly
makes the calculation much, much more complicated. Again, part of
the problem is attempting to determine how far into the future we
must predict relevant consequences. But the further problem is: where
do I draw the line with regard to consequences affecting what group of
persons / living beings / non-animate entities? As I hope is clear,
where I draw that line can make an enormous difference in the
possible consequences of an act – and, thereby, how I decide which of
two (or more) competing choices I should pursue.
In particular, as digital media radically extend the range of the
possible consequences of our actions (as dramatically illustrated in the
example of the cartoons of the Prophet Muhammad; Debatin 2007),
the question of “consequences for whom” becomes central. Unlike
commanders in war, we cannot simply assume that the consequences
of our actions are limited to the citizens of a given nation-state.
In the face of these sorts of difficulties and limitations, many people
find that they cannot rely on consequentialism alone. They may want
to retain consequentialist approaches for certain sorts of decisions –
for example, when it is possible to make reasonably reliable
predictions about the possible outcomes of our choices or when it is
reasonably clear who will be affected, and within a specified
timeframe. But, especially when this sort of insight and information
are not available, they may turn to one or more of the following ethical
frameworks.
2. Deontology
For deontologists, what stands out in our opening example is that you
have made a promise. And promises – along with, say, notions of basic
rights and duties – have a (near-)absolute quality to them: they cannot
be overridden by considerations as to how much pleasure (or pain)
might be gained (or avoided) by violating them.
Religiously grounded forms of deontology are perhaps most
immediately familiar to contemporary Westerners. For example, if I
am a Jew, Christian, or Muslim, I believe that God has given us
specific commandments and laws which define right and wrong for me
– no matter what the consequences. So, negatively, I am commanded
not to murder, not to lie, not to covet my neighbor’s property, not to
commit adultery, etc. Positively, I am commanded to love God and my
neighbor – the latter a form of the Golden Rule which, in fact, appears
to be a universal.
Hence, a religiously grounded deontologist would believe that it is
wrong to lie – even if, by lying, he or she might be able to gain
significant material reward.
As a still stronger example: religious pacifists – whether rooted in
Judaism, Christianity, or some forms of Buddhism – take the
sacredness of life (all life for the Buddhist, not just human life) as an
absolute. Hence, for pacifists, killing other human beings (and, for
many Buddhists, any living thing) is always wrong – no matter what
the consequences. Such pacifists would not only reject the
consequentialist thinking, for example, behind the decision to use
atomic weapons in World War II; they would further reject the use of
violence against others even in self-defense. For the religious pacifist,
killing another is always wrong, no matter what the consequences –
including the possible consequence of losing one’s own life.
Non-religious consequentialist considerations can also support
pacifism and/or conscientious objection more broadly. Socrates, for
example, argues in Plato’s Republic and Crito that doing violence or
harm to another leads to an unacceptable form of literal self-
destruction. Harming others is argued to work contrary to the central
ability of reason to discern the good, and the ability (virtue) of
judgment (phronēsis) to determine how to enact the good appropriate
to specific contexts and circumstances. To work contrary to these
functions of reason and judgment in turn runs the risk of degrading –
perhaps ultimately paralyzing or destroying – these central abilities.
And, if we degrade or destroy our ability to discern the good and judge
what it means, we will thereby lose our ability to make the judgments
needed to pursue a genuinely good life of contentment (eudaimonia)
and harmony. Failure to achieve these, finally, makes our lives no
longer worth living. Hence, the just or good person will never harm
another, no matter what sorts of other gains such harm might bring,
because to do so risks making life no longer worth living (e.g.,
Republic 335b–335e).2 However we understand the pacifism of Jesus
and the early Christian communities, Gandhi and Martin Luther King,
Jr., built on these Socratic and Christian roots (and, for Gandhi, the
Buddhist virtue of ahimsa – nonviolence) to argue and practice
nonviolent protest against unjust laws. Such nonviolence was intended
not only to prevent harm to the selves or souls of its practitioners (one
consequence) but also to awaken the conscience of the larger
community (a second consequence), in hopes that the larger
community would come to see the injustice of its behaviors, laws, etc.
(consequence 3) and then replace these with more just ones
(consequence 4).
But there are also rationalist deontologies – articulated most
importantly in the modern era by Immanuel Kant (1724–1804). Kant
is famous for the Categorical Imperative (CI). In contrast with a rule-
book ethics, the CI marks out a procedural way of determining what
actions are right. The first formulation of the Categorical Imperative
states: “So act that the maxim of your will could always hold at the
same time as a principle establishing universal law” (Kant [1788] 1956,
31). One of Kant’s own examples from the Foundations of the
Metaphysics of Morals ([1785] 1959) helpfully illustrates what this
means. Consider the possibility of needing to borrow money –
knowing full well, however, that you will not be able to repay the loan.
You also know that, in order to get the loan, you have to promise to
repay it (duh). Question: can you make what you know to be a false
promise in order to secure the loan? For Kant, the maxim of this
action would be: “When I believe myself to be in need of money, I will
borrow money and promise to repay it, although I know I shall never
do so.” But the Categorical Imperative requires that we ask: “How
would it be if my maxim became a universal law?” (ibid., 40).
This might well remind you of your parents asking you in high school:
what if everyone did that? But, for Kant, what is at stake in this
question is whether or not the larger social order that would result
from everyone following the maxim of “make a false promise when it is
convenient to do so” would be coherent – or logically contradictory.
On Kant’s analysis, attempting to universalize this maxim would
become self-contradictory:
For the universality of a law which says that anyone who believes
himself to be in need could promise what he pleased with the intention
of not fulfilling it would make the promise itself and the end to be
accomplished by it impossible; no one would believe what was
promised to him but would only laugh at any such assertion as vain
pretense.
(Ibid., 40)
Simply: if we knew that everyone would lie when convenient (the
result of universalizing the maxim of our action), then we would never
know when someone was telling us the truth. But a world in which we,
by default, cannot trust one another to make promises in good faith –
that is, to tell the truth when we promise one another, for example, to
repay a loan – would be a world in which promises thus lose their
meaning. Specifically, in this case, attempting to lie in order to acquire
a loan I have no intention of repaying becomes self-contradictory: if
everyone does it – that is, allows himself or herself to perform the
same act (the result of universalizing the maxim at work here) – then no
one would accept my promise at the outset. But if I cannot universalize
lying in this way – that is, make it a universal law acceptable for
everyone – then for Kant it is wrong, even when it seems convenient or
important. Again, it is always wrong, no matter what the
consequences.
In our case, a Kantian analysis would ask the question: what sort of
social/moral order would result if everyone were to break a promise
whenever doing so would result in at least more immediate, short-
term pleasure? Again, the result would be that we would never be able
to trust anyone’s promise – which would make promise-making self-
contradictory and meaningless. Hence, breaking a promise is always
wrong – no matter what the consequences.
Finally, Kantian deontology undergirds the widely shared belief that
there are ethical absolutes such as human rights. The discussion and
literature on rights are largely modern: Thomas Jefferson, inspired
by John Locke, insisted in the Declaration of Independence, “We hold
these truths to be self-evident: that all men are created equal; that they
are endowed by their Creator with inherent and unalienable Rights;
that among these are Life, Liberty and the pursuit of Happiness”
([1776] 1984, 19). The belief in human rights inspired the American
and French revolutions – and, subsequently, many of the political
transformations that define modern Western states. Most basically, if
human beings are free (in Kant’s language, autonomies), we must be
recognized as equal and deserving of respect – that is, not slaves, not
“just meat.” Early Western and then more global struggles for
establishing and expanding emancipation and equality, including
nineteenth-century abolitionist and women’s suffrage movements,
turned centrally on these conceptions and arguments.
Indeed, the belief that rights are absolutes that must be recognized
and protected is not simply a Western phenomenon. In 1948, the
United Nations issued its Universal Declaration of Human Rights – a
document that goes well beyond what some scholars call the first-
generation or primarily negative set of rights articulated by Locke and
Jefferson, to include second-generation or positive rights – for
example, the rights to education and health care. These rights have
been realized, for example, as duties of the state in Western Europe
and Scandinavia, while the right to health care remains hotly disputed
in the United States. In any event, a deontological notion of basic
human rights has driven much of the political activism and
transformation in modernity, both within and beyond the boundaries
of “the West,” as Gandhi in India, and multiple other liberation
movements, globally exemplify.
To be sure, these claims to universalism have been critically challenged
by feminist, postmodernist, and post-colonialist scholars (among
others). These critiques must be acknowledged and evaluated; but, as
with contemporary feminisms,3 the complexities are beyond the
bounds of this general introduction. At the same time, some of these
critiques have become more subdued in light of subsequent
developments. For example, we’ve seen contemporary feminists seek
to preserve Kantian notions of autonomy and thereby rights – however
importantly these are modified, for example in terms of relational
autonomies – precisely for the sake of sustaining a central ground of
argument for women’s equality, respect, and emancipation. At the
same time, recent work in cross-cultural psychology offers extensive
empirical evidence for shared values and norms across cultures,
thereby arguing again in the direction of some form of universalism
(e.g., Schwartz 2015).
All of this will lead to central questions regarding how culture shapes
our ethics – questions that are ever more pressing as digital media
make cross-cultural communication increasingly commonplace.
Difficulties …
To begin with, we may agree that consequentialism becomes suspect
when it leads us to violate what we may take to be (near-)absolute
human rights. That is, the utilitarian mantra of “the greatest good for
the greatest number” argues that the sacrifice of the few for the good
of the many is justifiable. We certainly make this argument in
wartime, when soldiers, by definition, are those whose lives are
potential sacrifices for the good of the many. But these days, we may
be less sympathetic to similar arguments that could be made, for
example, regarding enslavement. That is, a utilitarian can argue that,
just as it would be ethically justified to sacrifice a comparatively small
portion of the population (soldiers) for the sake of the greater good, so
we can justify the loss of certain freedoms and rights of a few (slaves)
if we can show that these costs are overridden by the greater benefits
such slaves would provide for the larger society. If we wish to argue
against the utilitarian at this point, we may do so by reaching for some
notion of (near-)absolute human rights – e.g., rights to life, liberty,
and property. If, as modern deontologists would argue,
these rights exist and are (near-)absolute, then they may never be
violated – for example, by turning some portion of the population into
slaves – even if to do so might lead to greater pleasure and enjoyment
on the part of everyone else.
Likewise, we might admire the courage of protesters – for example,
during the civil rights movement of the 1960s or in more recent
political protest movements around the world – who practice the
nonviolent pacifism of a Gandhi or a King, sometimes with remarkable
success. If we are deontologists, we would say that they are doing the
right thing – even if it costs them great personal pain, and even if they
are not always successful in gaining their intended political outcomes.
But many people are not willing to accept a Kant-like absolute
prohibition on lying. Sometimes it seems quite clear that lying
would be justified – for example, if it were to save a life (and, even
more so, many lives).
In fact, Kant developed a more nuanced position in his later works,
one that makes greater ethical room for deception: while we might
deceive others for less than ideal reasons, deception can also help us
become better persons, insofar as it allows us to hide our more
negative characteristics while nonetheless developing a more virtuous
character (Myskja 2008). As Kant’s own transformation suggests, whether or not
deontological approaches can consistently make room for what appear
to be justified and important “exceptions to the rule” is a central
question for defenders of this approach.
DISCUSSION/REFLECTION/WRITING QUESTIONS: A FIRST GO AT ETHICAL
THEORY
Given this initial overview of consequentialist vs. deontological
approaches, review the initial example of promise-keeping vs. enjoying
pizza and beer with friends. In particular:
(A) How did you initially analyze the dilemma – i.e., more as a
consequentialist and/or more as a deontologist? (As the use of the
“and/or” suggests, while there are sharp differences between the two
positions, it is possible for us to use both in some combination or
another.)
(B) Now that you’ve had a chance to review and explore these two
frameworks, try applying them to another ethical dilemma – ideally,
one affiliated with the use of digital media. In doing so:
(i) Describe the dilemma as fully and accurately as you can.
(ii) Explain what your own initial response to this dilemma might be.
That is, what would you decide to do, and how would you decide what
to do?
(iii) Then apply each of these frameworks to the dilemma as best you
can – perhaps with the help of cohorts and/or your instructor. Make
clear how each framework leads to a given outcome or decision
regarding possible acts or choices.
(C) Given the dilemma you choose to analyze, does the
consequentialist approach lead to the same ethical conclusion as a
deontological approach or to a different one? Especially if the
outcomes are different, which outcome more closely fits with your own
initial response to this dilemma (i.e., your response in B.ii)?
(D) Especially if your initial response/s mesh/es well with either a
consequentialist and/or a deontological response, do you see any
additional reasons, insights, arguments, analytical approaches, etc.,
offered by consequentialism and/or deontology beyond those that you
initially used in approaching this problem?
What are these, and do you think that they may prove useful in
approaching other ethical dilemmas as well? (In Kantian terms: can
you universalize these – or are they just useful in this particular case?)
(E) Especially if the outcomes of these two different approaches are
different, what does this difference mean? That is, are we forced, for
example, to choose between one or the other approach, such that one
is always right and the other always wrong?
If you say “yes,” can you justify (provide good reasons, argument,
evidence for) your response?
If you say “no,” again, can you justify (provide good reasons,
argument, evidence for) your response?
3. Meta-ethical frameworks: Relativism, absolutism
(monism), pluralism
Ethical relativism
These contrasts between utilitarian and deontological ethics suggest,
on first glance, a meta-ethical view called ethical relativism. That is, in
the face of (often radically) different ethical frameworks and claims, it
is tempting to believe that these differences must mean that there are
no universally valid ethical norms, values, approaches, etc. Rather, all
such norms, values, and approaches are valid only relative to (i.e.,
within the domain of) a given culture or group of people. Such ethical
relativism is even more tempting as we gain more knowledge and
experience of how people live, think, and feel in cultures different from
our own – a knowledge increasingly commonplace in a world ever
more interconnected by digital media.
Ethical relativism offers two chief advantages. First, it fosters
tolerance of the views and practices of Others (those who are different
from ourselves). Such toleration is itself an important ethical value;
generally, it seems, the world could do with much more tolerance of
important ethical (and cultural) difference. Second, ethical relativism
offers a certain kind of relief: if values and practices are always and
only legitimate in relation to a specific culture, then we need look no
further for values, practices, frameworks, etc., that might claim
genuinely universal validity. This latter task is indeed hard (but, we
will see, not impossible) work. Ethical relativism gives us the excuse
and rationale we need to dismiss this task.
In my view, ethical relativism enjoys a third advantage: in some
important instances, it appears to be true. For example, in
Switzerland and Germany, guests are expected to show respect at a
party by shaking hands with not only the hosts, but also all the guests,
before leaving. In the US, there is no such custom. For their part,
people in the US often hug one another when greeting or departing –
including university colleagues. Doing so in a Germanic culture, by
contrast, is almost never appropriate. At first glance, then, there
appears to be no absolute right or wrong regarding such
greeting/parting rituals. Rather, what is right in Germanic cultures
often seems bizarre in the US, and what is right in the US can border
on the offensive in Germanic cultures. (That said, we will see in the
discussion of ethical pluralism that these differences in
greeting/parting rituals may not be quite so absolutely relative as they
first appear.)
But ethical relativism also faces two especially important difficulties.
First, it is logically incoherent – and this in two ways. To begin with,
the ethical relativist faces a simple, but fundamental, contradiction: on
the one hand, she or he wants to argue that there are no universally
valid values, norms, practices, etc.; on the other hand, she or he
concludes that we must thereby be tolerant of ethical norms and
practices different from our own. (Just to be clear: we can get to this
tolerance in other ways, as we will see below in the section on ethical
pluralism.) But tolerance thereby appears to emerge as itself a
universally valid ethical norm or value – i.e., one that the ethical
relativist argues we all should agree upon and follow.
Hence, the position of ethical relativism seems caught in a
fundamental contradiction: if all ethical values, norms, and practices
are indeed valid or legitimate only in relation to a given culture or
time, then it would seem that tolerance must likewise count as only a
relative value. And so, if there are those who are rigidly intolerant on
some point – for example, the white racist’s intolerance for people of
color – it is not at all clear how the ethical relativist can coherently
insist that such a person, as a product of a given culture and time,
should rather have exercised tolerance.
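The contradiction can be put a bit more formally (a schematic rendering, offered only as an aid). Letting $U(x)$ abbreviate “$x$ is a universally valid ethical norm,” the relativist appears committed to both

\[ \neg \exists x \, U(x) \qquad \text{and} \qquad U(\text{tolerance}), \]

where the second claim is a direct counterexample to the first.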
The second logical problem for the ethical relativist is somewhat more
complex. The primary argument for ethical relativism can be put as
follows:
(Premise 1): If there are no universally valid values, practices,
beliefs, etc., then we would expect to find diverse ethical values,
practices, beliefs, etc., in diverse cultures and times.
(Premise 2): We do find diverse ethical values, practices, beliefs,
etc., in diverse cultures and times.
(Conclusion): Therefore, there are no universally valid values,
practices, beliefs, etc.
In logical terms, this argument commits the basic fallacy of affirming
the consequent.
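In schematic form – the standard presentation of this fallacy, with $p$ and $q$ standing for the antecedent and consequent of Premise 1 – the inference runs:

\[ p \rightarrow q, \quad q, \quad \therefore \; p \qquad \text{(invalid: affirming the consequent)} \]

Contrast the valid form, modus ponens, which infers $q$ from $p \rightarrow q$ together with $p$.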
To see that this argument is a fallacy, consider another argument that
uses the same form as this one:
(Premise 1): If you like strawberry-flavored gum, then we can
expect to find a red-colored packet of gum in your pocket.
(Premise 2): We do find a red-colored packet of gum in your
pocket.
(Conclusion): Therefore, you must like strawberry-flavored gum.
While this may seem sensible enough, only a little reflection is needed to see that both premises could be true while the conclusion is false: perhaps you’ve switched to cinnamon-flavored gum today, which also comes in a red-colored packet?
Both arguments commit the same fallacy – meaning, the conclusion
does not necessarily follow. Back to the first argument: it is possible
for us to find diverse values, beliefs, practices, etc., in diverse cultures
for other reasons besides the one offered in the first premise (i.e., that
there are no universally valid values, beliefs, practices, etc.). As we will
explore more fully below, the meta-ethical position of ethical pluralism
argues precisely that these diverse values, beliefs, practices, etc., are
the result of diverse interpretations/applications/understandings of
shared ethical norms.
The debate between ethical relativists and ethical pluralists is ongoing
– one we will reflect upon further in subsequent reflection, discussion,
and writing questions. But, at this juncture, the crucial point is: if
there are plausible alternative reasons for our observing diverse
practices, beliefs, norms, etc., other than just the one claimed by the
ethical relativist (i.e., there are no universally valid norms in the first
place), then the argument for relativism is simply not valid.
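Readers comfortable with a bit of programming can verify this mechanically. The following short sketch (in Python; the encoding of the argument into propositional variables is offered only as an illustration, not as part of the philosophical analysis) enumerates all truth-value assignments and looks for a row in which both premises are true but the conclusion is false:

from itertools import product

def implies(a, b):
    # Material conditional: "a -> b" is false only when a is true and b is false.
    return (not a) or b

# p: "there are no universally valid values"; q: "we observe diverse values."
# Affirming the consequent infers p from the premises (p -> q) and q.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and q and not p  # both premises true, conclusion false
]

print(counterexamples)  # [(False, True)] -- the form admits a counterexample, so it is invalid

Notice that the single counterexample row is precisely the pluralist’s alternative: diverse values are observed ($q$ is true) even though universally valid values exist ($p$ is false).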
The second set of objections against ethical relativism centers on arguments that seek to show that ethical relativism can actually work against the very tolerance and mutual understanding that it seems to endorse and that make it so attractive. Again, this involves two
elements. First, ethical relativism forbids any sort of ethical judgment
about “the Other” – the person whose values, beliefs, practices, etc.,
are different from our own – because, it is argued, they are the product
of a different culture, time, etc. But this means, for example, that those
raised in the United States and the United Kingdom can neither praise
the 2018 Nobel Peace Prize winners, Denis Mukwege (Democratic
Republic of Congo) and Nadia Murad (Iraq), as moral heroes (for their
work to end sexual violence as a weapon of conflict and war), nor
condemn the Holocaust as a moral monstrosity. Ethical relativism
thus paralyzes moral judgment. Such a paralysis requires us to accept
genocide in Rwanda, rape-rooms and rape as terror in war, the use of
babies and children as carriers of explosives in suicide bombings – or
systematic oppression of women within one’s own country as part of
the “culture” of a given religious group.
Moreover, Mary Midgley ([1981] 1996) has argued that ethical
relativism further leads to what she calls moral isolationism. This view
presumes that there is an absolute boundary between specific cultures.
This boundary not only prevents us from making ethical judgments
about the values, beliefs, practices, etc., of “the Other,” but thereby
suggests that the members of one culture can never learn or gain
anything of value (ethical or otherwise) from the members of another
culture. But the history of how diverse cultures have emerged over
time – that is, precisely through processes of intermixing and
hybridization with others – shows this to be false:
If there were really an isolating barrier, of course, our own culture
could never have been formed. It is no sealed box, but a fertile jungle
of different influences – Greek, Jewish, Roman, Norse, Celtic and so
forth, into which further influences are still pouring – American,
Indian, Japanese, Jamaican, you name it. The moral isolationist’s
picture of separate, unmixable cultures is quite unreal. … Except for
the very smallest and most remote, all cultures are formed out of many
streams. All have the problem of digesting and assimilating things
which, at the start, they do not understand. All have the choice of
learning something from this challenge, or, alternatively, of refusing to
learn, and fighting it mindlessly instead.
(Ibid., 119)
Especially as digital media dramatically accelerate these processes of
encountering other cultures, we can indeed see rapid cultural change
in our own day, described in part in terms of cultural hybridization
and the development of “third cultures.” Digital media thereby
confront us with a seemingly overwhelming range of cultural diversity
– thus dramatically heightening the temptation toward ethical
relativism. At the same time, however, a world increasingly interwoven
precisely by digital media and computer networks only amplifies the
force of Midgley’s insistence – “Morally as well as physically, there is
only one world, and we all have to live in it” ([1981] 1996, 119). Insofar
as ethical relativism leads to moral isolationism and a perhaps fatal
paralysis of moral judgment, these logical outcomes fly in the face of
what we actually do in the contemporary world: we evaluate and make
judgments about those elements of cultural practices, beliefs, norms,
etc., different from our own that we will accept or reject.
Ethical absolutism (monism)
Opposite to ethical relativism is a position often called ethical
absolutism or ethical monism. Briefly, this view insists on the
following:
There are universally valid norms, beliefs, practices, etc. – that is, such
norms, beliefs, practices, etc., define what is right and good for all
people at all times and in all places.
What is often tacit or unstated for the ethical absolutist is the
additional claim:
I/we know what those norms, beliefs, practices, etc., are – completely,
clearly, unequivocally.
This may seem like an odd claim to spell out, but, as we will see, this is
an especially crucial element of the ethical absolutist’s position.
Finally, the ethical absolutist will thereby have to argue:
Those norms, beliefs, practices, etc., that are different from the ones
we know to be universally valid must therefore be wrong (evil, invalid,
etc.).
In this way, the ethical absolutist is in the position both to applaud
those beliefs and behaviors that agree with his or her own view of what
is universally valid, and to condemn those beliefs and behaviors that
differ from his or her own.
Given this meta-ethical framework, the ethical absolutist enjoys at
least one advantage over the ethical relativist: the ethical absolutist
can coherently and forthrightly applaud or condemn the values,
beliefs, practices, etc., of others – for example, she or he could applaud
a Denis Mukwege and Nadia Murad, and condemn the Holocaust. At
the same time, however, this leads, obviously, to the intolerance of
diversity that the ethical relativist finds so distasteful and destructive
(and rightly, at least up to a point).
The contrasts between the ethical relativist and the ethical absolutist
usually work around first-order ethical norms, values, practices, etc. –
for example, abortion and euthanasia, war and peace, sexual
identity/identities and relationships, freedom of expression, our
treatment of animals and the environment at large, the role of the law
vs. individual conscience, etc. For example, one could take an
absolutist position either for or against abortion. An ethical absolutist
might hold that all life is sacred – and that the baby/fetus in the
mother’s womb is a sacred life that must be protected at all costs,
including, unfortunately, the cost of the life of the mother in certain
circumstances. And, hence, abortion is never justified, even to save the
life of the mother. Another ethical absolutist might agree that all life is
sacred – including that of the mother; and so, if, say, a monstrously
deformed baby/fetus thereby directly threatens the life of the mother,
it is morally permissible – indeed, morally required – to remove and
destroy the baby/fetus for the sake of saving the mother’s life. While
the two absolutists will thus profoundly disagree with each other, an
ethical relativist will say, in effect, to each his or her own; neither
position is ultimately “right,” but we should learn to tolerate important
ethical differences such as these and go on.
Suffice it to say that the ethical relativist’s response here will satisfy
neither of our ethical absolutists. But the primary point here is to
move to the second-order or meta-ethical level of discussion – i.e., to
apply these meta-ethical positions to the ethical frameworks of
utilitarianism and deontology. Hence, we can ask: how would these
two positions have us respond to the differences between utilitarian
and deontological approaches?
Roughly, it would appear that the ethical absolutist would require us
to accept one of these approaches – and thereby reject the other. The
ethical relativist, by contrast, would likely say: it doesn’t matter –
neither view can claim universal validity. Indeed, it’s a waste of time to
wrestle with this question, since there is no ultimate right or wrong in
any event – it’s all a matter of culture, individual preference, etc.
REFLECTION/DISCUSSION/WRITING QUESTIONS: RELATIVISM AND
ABSOLUTISM
1. Given the accounts of ethical relativism and ethical absolutism,
which of these positions better describes your own with regard to the
following (first-order) ethical claims and issues?
(A) The destruction of human life – and most especially innocent
human life – is always wrong; hence, abortion is never justified.
(B) Our right to determine what happens to our own bodies is the
most fundamental of human rights. Hence, a woman has an absolute
right to determine what happens to her body – and this includes the
right to abortion, especially if her own life is imperiled by a pregnancy.
(C) Killing is always wrong – even in self-defense.
(D) Killing is sometimes justified – beginning with self-defense.
(E) You should always keep a promise.
(F) Sex before marriage is morally acceptable.
(G) (Suggest additional “hot-button” moral issues for discussion and
reflection.)
2. In response to these – and/or other – issues, it is probable that you
will find that you are an ethical relativist with regard to some and an
ethical absolutist with regard to others. Insofar as this is the case, can
you begin to sort out and articulate what arguments, evidence, and/or
other sorts of reasons you might have for supporting your position
(i.e., as either an absolutist or a relativist) vis-à-vis a given issue?
Beyond relativism and absolutism: Ethical pluralism
I hope it is beginning to be clear that, whatever the strengths and
advantages of both ethical absolutism and ethical relativism, neither
position is fully satisfactory. To begin with, if the previous reflection,
discussion, and writing exercise has been successful, you will have
discovered that – like most people in my experience – there are ethical
issues about which you may be profoundly absolutist and others that
seem to be best left to a sort of relativist tolerance.
But this is not especially coherent: ethical absolutism and ethical
relativism make mutually exclusive claims – there are / are not
universally valid norms, values, practices, etc. How can we coherently
hold both of these claims together?
As you’ve likely guessed, there is a third position – ethical pluralism –
that seeks to resolve some of the problems faced by relativism and
absolutism.
Ethical pluralism basically argues that the ethical absolutist may be
right – with regard to his or her opening premise: there are values,
norms, practices, etc., that are valid for all human beings at all times
and in all places. Unlike the absolutist, however – who insists that
these values, norms, practices, etc., apply in exactly the same way at
all times and in all places – the pluralist argues that it is possible
(indeed, inevitable and desirable) to interpret/understand/apply
these norms in diverse ways in diverse contexts. In this way, the
ethical pluralist is able to agree at least partially with the empirical
observation highlighted by the ethical relativist. Obviously, there are
different practices in diverse times and cultures. But, rather than
claiming (as the relativist’s argument does – invalidly, we have seen)
that these different practices demonstrate the absence of universally
valid norms and values, the ethical pluralist argues that these diverse
practices are the result of how different contexts will require us to
interpret and apply the same norm in sometimes strikingly different
ways.
For example, it is easy to observe that people with kidney disease are
treated differently in different cultures and places. In the United States
– at least for those who can afford good health insurance – kidney
dialysis, despite its enormous expense, is available more or less
without regard for the patient’s age. By contrast, in the 1990s, policies
aimed at limiting costs on the part of the UK National Health Service
(NHS) resulted in no one over the age of 75 receiving kidney dialysis,
despite their diagnosed need (Musgrave 2006, 9). (Happily, these
policies have changed considerably – but let’s ignore that for the
moment, for the sake of the example.) Lastly, at least early in the
twentieth century, in the harsh environment of the Canadian Arctic, an
elderly member of the Kabloona community who was no longer able to
contribute to the well-being of the community might voluntarily
commit a form of suicide (Boss 2013, 9f.; see Ess 2007).
Again, the ethical relativist argues that these three different practices
show that there are no values or norms shared universally across
cultures. For the ethical pluralist, however, these three practices stand
as three diverse interpretations, applications, and/or judgments as to
how to apply a single norm – namely, the health and well-being of the
community – in three very different environments and cultures. So, at
least the relatively affluent in the US can afford the health insurance
that will provide kidney dialysis without age limit; but, even in a
relatively wealthy nation such as the UK, failure to set limits on
subsidized treatments would (at the time) have bankrupted the
National Health Service. Finally, in the unforgiving environments of
the Kabloona, the well-being of the community would be jeopardized if
scarce resources were diverted to caring for those who no longer could
contribute to the community. Hence, such care is literally not
affordable by the community – nor, apparently, is it expected by the
individual. The practices of each of these communities clearly differ.
But, for the ethical pluralist, these different practices rest upon a basic
agreement on the well-being of the community as a shared norm or
value. Each practice, simply, represents a distinctive interpretation of
that norm; the diverse contexts of these communities require each of
them to interpret and apply that norm differently.
The ethical pluralist can hence agree with the ethical relativist that (a)
we do observe diverse practices as we move through different cultures
and times, and that (b) we should tolerate these differences – rather
than condemn them straight out, as the ethical absolutist is forced to
do – at least insofar as we can understand them to be different
interpretations of a shared norm or value. But the ethical pluralist,
unlike the ethical relativist, does not thereby tolerate any and all
practices. (Recall: such tolerance entails for the ethical relativist a
serious logical contradiction.) Rather, if a practice – for example,
genocide – clearly violates a basic norm or value (in this case, the well-
being of the community, at least as understood as an inclusive human
community rather than an exclusive tribal community), then the
ethical pluralist can condemn such a practice as immoral.
And so the ethical pluralist can overcome some of the chief difficulties
of ethical relativism, including its logical incoherence and its inability
to distinguish between Nobel Peace Prize-winners and the Holocaust.
At the same time, however, the ethical pluralist shies away from the
sort of intolerance for difference that often follows from ethical
absolutism. To recall: the ethical absolutist seems restricted to one and
only one set of values and norms that must be interpreted, applied,
and practiced the same way by all people in all places and at all times
– and so any variation from this one set of norms and practices must
be rejected as morally wrong. (In the example of kidney dialysis, a
moral absolutist located, say, in the US might then well condemn the
practices of the Kabloona as immoral.) By contrast, the ethical
pluralist can tolerate – indeed, endorse – these differences in practice,
insofar as they can be shown to reflect diverse interpretations and
applications of a shared norm or value. In these ways, ethical
pluralism seeks to take up at least a limited version of the tolerance for
difference enjoined by the ethical relativist, while avoiding a tolerance
so complete as to paralyze ethical judgment entirely. An ethical
pluralist does so while at the same time taking up at least a limited
affirmation of universally valid values, norms, and practices as
endorsed by an ethical absolutist, yet avoiding the ethical monism and
intolerance of difference that such absolutism easily falls into.
Strengths and limits of ethical pluralism
Ethical pluralism thus provides us with an important way of
understanding and responding to the sometimes radical differences
that we encounter, especially at a global level.
Negatively: if we can choose only between ethical relativism and
ethical monism, then any effort to undertake a digital media ethics
that might “work” cross-culturally is doomed to two equally
unattractive choices: either we follow the relativist and tolerate any
and all practices (saving us, admittedly, the difficult work of having to
think about any of this at all …), or we adopt an absolutism that would
result in a kind of ethical colonialism – i.e., the imposition of a single
set of practices upon all peoples, because any difference from the right
set of values and practices must be wrong.
Positively: ethical pluralism allows us to see – in some important
cases, at least – how people in diverse cultures may share important
norms and values; but, at the same time, we are able to interpret and
apply these norms and values in sometimes very different sorts of
practices – ones that reflect diverse cultural contexts and traditions.
Ethical pluralism thus allows us to have a global digital media ethics –
one that provides a shared set of guidelines for how we may behave
ethically in relationship with one another. But these shared norms and
values are interpreted through the lenses of different traditions and
applied in different cultural contexts. These different interpretations
or applications thereby allow us to preserve the practices and
characteristics that make each culture distinctive and unique. In this
way, ethical pluralism is a crucial element of the “ethical toolkit” we
need if we are to develop a global ethics that respects and preserves
diverse cultural traditions and identities.
Ethical pluralism enjoys two additional strengths. First, it is a way of
approaching ethical matters that is found not only within Western
traditions (beginning, at least, with Plato and Aristotle, but extending
into contemporary ethical frameworks such as feminism [see Warren
1990]) but also throughout diverse religious and philosophical
traditions such as Islam (Eickelman 2003), Confucian thought (Chan
2003), and others. Ethical pluralism thus appears to be a widely
shared and recognized way of approaching ethical differences – not
simply a provincially Western way. In particular, Shannon Vallor’s
extensive synthesis of global traditions of virtue ethics, as then
reformulated specifically to help us come to grips with the multiple
ethical challenges of contemporary technologies, explicitly includes
this pluralistic approach in turn (2016, 54f., 64).
Second, ethical pluralism appears in fact to “work” in contemporary
practices. Perhaps the most important example here is the issue of
privacy (Ess 2006; Hongladarom 2007). As we have seen in chapter 2,
expectations of privacy and correlative data privacy protection laws
vary from country to country – in part as they rest on dramatically
different, if not contradictory, understandings of human beings. But it
is arguable that there has been an increasing recognition of a shared
notion of privacy that holds for both Western and non-Western
countries and cultures. This shared notion is interpreted and applied
in different ways, reflecting first of all the differences between cultures
in terms of the importance they place on the individual vis-à-vis the
community. The diverse practices of data privacy protection thereby
reflect – and, more importantly, preserve – some of the fundamental
values and traditions of each culture. In this way, ethical pluralism
seems to “work” as an important component of a global information
and computing ethics. And so we might expect that, in other issues of
digital media ethics, pluralism will likewise emerge as an important
strategy for preserving cultural differences while developing a shared,
genuinely global ethics.
Hongladarom has further shown how ethical pluralism works in
praxis regarding the deep differences between Confucian and
Buddhist understandings of selfhood vis-à-vis a shared right of respect
for the person online (Hongladarom 2017). At the same time, however,
ethical pluralism will not resolve all the differences we encounter as
different cultures and traditions approach the ethical issues of digital
media. To use the example of the Muhammad cartoons (Debatin
2007), for at least many (though by no means all) religious believers,
cartoons that can only be seen as blasphemy must not be published.
For the editors of the Danish newspaper Jyllands-Posten, however,
essential ethical and political values were at stake in commissioning
and publishing the cartoons – namely, freedom of expression and
freedom of the press (Warburton 2009, 18–21, 52). Add to this the
cultural observation that, for most Danes, anything – even the queen –
is an appropriate occasion for humor (at least, up to a point). It is by
no means clear how the conflict here can be resolved in a pluralist
fashion. Such an analysis would have to show that these two views are
in fact not as contradictory as they appear – that they are, rather,
simply diverse interpretations of a shared ethical norm (which
one[s]?). (For additional critiques, see Capurro 2008.)
Hence, in the face of diverse cultural norms, beliefs, and practices, we
will not always be able to resolve these sometimes deep and
irreducible differences by way of an ethical pluralism. More broadly,
then, in the face of such differences, we are obliged to discern whether
we most justifiably understand and respond to these differences as an
ethical relativist, an ethical absolutist, and/or an ethical pluralist.
REFLECTION/DISCUSSION/WRITING QUESTIONS: META-ETHICS – A FIRST
RUN
As many of the examples we’ve explored in this book should make
clear, the culture(s) which surround us, whether during our
upbringing and/or in our work and leisure as mature people, play a
central role in shaping our ethical thinking. (At the same time, readers
should keep in mind here the important caveats and difficulties of
using cultural generalizations: see chapter 2, “Interlude,” pp. 49–53.)
In particular, the comparative ethicist Bernd Carsten Stahl notes that,
since the twentieth century, at least within the English-speaking
world, utilitarian approaches have dominated over alternatives. By
contrast, deontological approaches – especially as rooted in Kant and
then the contemporary German philosopher Jürgen Habermas – have
been favored in the Germanic countries, including much of
Scandinavia. These in turn contrast with what Stahl characterizes as
French moralism in Montaigne and Ricoeur. On Stahl’s analysis, this
approach to ethics is teleological – i.e., oriented toward the goal or
telos of discerning and doing what is necessary for the sake of an
ethical and social order that makes both individual and community life
more fulfilling, productive, etc., through “the propagation of peace and
avoidance of violence” (Stahl 2004, 17).
As we will see more fully below, these views further contrast with non-
Western traditions. Broadly, modern Western traditions have
emphasized the individual as the primary agent of ethical reflection
and action, especially as reinforced by Western notions of individual
rights. Certainly, these traditions further recognize that individuals’
actions are made within and affect a larger community; and, as we
have seen in the examples of Scandinavian notions of allemannsretten
(“all people’s rights”: chapter 3, pp. 115–16) and feminist notions of
relational autonomy (chapter 2, pp. 77–8), there are ethical traditions
in the modern West that indeed emphasize greater attention to
community, not simply individual, actions and goods. But, at least in
comparison with modern Western traditions, non-Western traditions
– including various forms of Buddhism, Confucian thought, and
indigenous traditions in Africa, Australia, and the Americas – lay
greater emphasis on the community and community well-being as the
primary focus for ethical reflection and choice.
This ethical map becomes even more complicated, first of all, as we
recognize that these generalizations will only go so far: again, each
cultural generalization immediately implies counterexamples,
additional layers and influences, etc. The complexity grows further as
we add both:
(a) premodern and contemporary ethical traditions – as we are about
to see, the virtue ethics expressed by Socrates and Aristotle and its
contemporary expressions; and
(b) contemporary ethical frameworks such as feminism, and especially
the ethics of care, along with environmental ethics.
While overwhelming at first, exploring these diverse ethical
approaches is both: (a) unavoidable, especially as digital media allow
more and more people around the globe to communicate and interact
with one another; and (b) necessary – first of all in order to overcome
our own ethnocentrism and its attendant dangers. Such exploration
should further help us to make better-informed choices regarding our
own ethical frameworks and norms – and, ideally, assist us in moving
toward a more inclusive, genuinely global digital media ethics that
recognizes and fosters our ethical differences alongside our shared
norms and values.
At this stage, however, it may be helpful to take a first run at learning
how to apply the meta-theoretical positions of ethical relativism,
monism, and pluralism.
1. Presuming your own prevailing cultural context(s) and/or culture(s)
of origin are primarily Western, review Stahl’s characterization of
various national cultures as principally utilitarian, deontological, and
teleological.
(A) Which, if any, of these frameworks seems closest to what you
observe in your culture to be a prevailing way of making ethical
decisions? Illustrate your response with an example or two – ideally,
one drawn from an ethical issue evoked by the use of digital media.
(B) Which, if any, of these frameworks seems furthest away from what
you observe in your culture to be a prevailing way of making ethical
decisions? You can illustrate and support your response here by
applying this framework to the example(s) you describe in 1.A.
(C) What are the results? That is, do the two frameworks that you
identify and apply in 1.A and 1.B issue in conflicting ethical
conclusions (e.g., undertaking otherwise illegal music downloading
because the benefits of doing so seem to outweigh the costs – i.e., a
utilitarian analysis – vis-à-vis rejecting such an activity because it
violates what may be argued to be a just law – i.e., a deontological
analysis)?
And/or: do these two frameworks end up endorsing the same, or at
least coherent and complementary, ethical conclusions or claims? (For
example, we saw in chapter 2 how both deontological and utilitarian
approaches to privacy in the West endorse individual privacy rights as
essential – though for characteristically different reasons.)
And/or: do these two frameworks issue in (at least, seemingly)
contradictory results?
(D) Especially if these two frameworks issue in different, perhaps
contradictory, results, how do you respond? That is: do you interpret
or understand these differences primarily as
(i) an ethical relativist?
(ii) an ethical monist?
(iii) an ethical pluralist?
However you respond to these differences, do your best to support and
justify your answer with one or more arguments, elements of evidence,
etc.
2. The same set of questions – but now encompassing a global range of
ethical frameworks – may be asked. In particular: if your cultural
context(s) and/or culture(s) of origin are non-Western, so that you
already have a strong familiarity with especially non-Western ethical
frameworks, now might be a good time to undertake the more global
version of these questions. (And/or: you and/or your instructor may
decide it’s better to wait on these until after the fuller discussion of these frameworks that follows.)
Either way, this exercise should begin by asking you to take up two
frameworks – one characteristically Western (e.g., utilitarianism) and
one characteristically non-Western (e.g., Confucian, Buddhist, Hindu,
African, etc.). With these two frameworks as your starting point, the
questions in (1) can then be pursued.
4. Feminist ethics
As the discussion so far demonstrates, virtually all of the philosophers
who have developed important ethical frameworks in Western (and, as
we will see, Eastern) traditions are men. Especially for the second-
wave feminists of the 1960s and 1970s, this observation naturally leads
to an important question: is it possible that the conceptions,
approaches, values, etc., that make up prevailing ethical (and other
philosophical) frameworks reflect characteristically “male” or
“masculinist” ways of knowing and thinking? Or, to state it negatively:
is it possible that these prevailing ethical frameworks thus tend to
ignore or exclude what are characteristically women’s ways of knowing
and reflecting on ethical issues?
In the domain of ethics – specifically, in the area of developmental
psychology concerned with how people reflect on and seek to resolve
ethical difficulties – these questions were given particular force
through the work of Carol Gilligan. Gilligan’s landmark book In a
Different Voice (1982) documented both important parallels and
distinctive differences between the ways in which men and women
characteristically approached central ethical dilemmas. Briefly put,
Gilligan’s interviews with women facing difficult ethical choices
(including the possibility of abortion) challenged the then-prevailing
schema of ethical development established through the work of
Lawrence Kohlberg – work that, in fact, built on observations of and
interviews with men exclusively. On the one hand, for both Gilligan
and Kohlberg, the evidence of their interviews and observations
suggested that individuals develop their abilities to recognize and
come to grips with ethical issues over time and in ways that can be
described by a three-stage schema (with each stage in turn involving
two sub-stages). Preconventional morality, describing how pre-
adolescents grapple with ethical matters, works on a simple reward–
punishment schema: one is “good” because good acts are rewarded,
and one (usually) avoids being “bad” because bad acts are punished.
Conventional morality, characteristically the moral stage of young
adolescents and adults, reflects the values, practices, and expectations
prevailing in the larger society, with an emphasis on justice and
correlative notions of recognizing and preserving basic individual
rights – at least as these contribute to the maintenance of the status
quo. Postconventional morality, by contrast, represents a move into
significant sorts of ethical autonomy (in Kant’s term), as individuals
take conscious responsibility for their ethical principles and reflections
in new ways, so as perhaps to radically critique and re-evaluate
prevailing social claims regarding rights and justice. As is often the
case, such reflections can lead individuals to draw new ethical
conclusions regarding right and wrong that run against the prevailing
morality of their larger society. Historically, such postconventional
moralists have been important for what we think of as ethical and
social progress: their postconventional morality has led them to
challenge prevailing social practices and values and, in the view of
subsequent generations, helped to lead society more broadly to a set of
values and practices that are seen as ethically preferable over earlier
ones. (To be sure, as the experience of these exemplary thinkers makes
clear, moving to a postconventional stage is difficult – indeed,
Kohlberg claimed that most people never move beyond the
conventional stage.)
While her findings support the outlines of this large framework,
Gilligan found that, as they moved through these stages, women’s
moral experiences demonstrated important differences. For our
purposes, the most important differences are as follows. For Kohlberg
(and, to be fair, for most ethicists in the modern West), the key to
moving beyond conventional morality is the critical use of reason –
where reason is understood to focus especially on general principles,
including rules of social justice and individual rights. So a Martin
Luther King, Jr., for example, can argue that segregation laws are
unjust because they violate the basic principle of justice in a
democracy and the modern liberal state; only those laws that rest on
the consent of the governed are just. But segregation laws were passed
by a white population, in states where the people of color also affected
by these laws had no vote – and hence no possibility of exercising
consent. Hence such laws are unjust. On the basis of such arguments,
King can then justify disobeying the law of the land – in
developmental terms, going beyond conventional morality to a
postconventional morality based on clear principles of justice and
rights (King [1963] 1964).
To be sure, Gilligan found that women certainly employ reason –
minimally, the capacity for inference and the recognition of important
general principles – in confronting their ethical quandaries. But, in
addition to reflection on general principles, she found that women as a
group tended to make three distinctive maneuvers. To begin with, as
Piaget had already observed, little girls may be less concerned than
their male counterparts with making sure, for example, that all the
rules of a game are followed (justice), while they may be more
concerned that everyone within a given group has the feeling of being
treated fairly, of being included, etc., even if this sometimes means
breaking the rules (Gilligan 1982, 32–8). But this means, second, that
women as a group tend to focus on the emotive dimensions of an
ethical problem. Third, a problem is seen to be ethical especially as it
involves a web of interpersonal relationships, not simply individuals
as “nodes” in those relationships marked only by defined sets of rights,
etc.
So, for example, Kohlberg asked his (male) interviewees to respond to
the “Heinz dilemma.” In this scenario, a husband (Heinz) needs to
obtain life-saving medicine for his wife; but he cannot afford to do so,
and so his pharmacist refuses to provide him with the medicine. In
Kohlberg’s analysis, men as a group tended to analyze this dilemma in
terms of the rights and principles involved – e.g., the right of the
pharmacist to protect his property (and sources of profit and
livelihood) vs. the wife’s ostensible right to life. But, as young women
were presented with this dilemma, as a group they tended to want
more information – first of all, about the relationships between the
three protagonists. For example: would Heinz’s wife really want him to
risk going to jail for her sake? Is it possible that they could talk with
the pharmacist and work out a way to pay for the drug over time (ibid.,
25–32)?
In these ways, the women’s questions often teased out specific details
about the possibilities and relationships in play that might otherwise
be ignored through an exclusive focus on general principles of justice
and abstract rights. In doing so, the women’s questions may suggest
alternatives to the simple, either/or dilemma presented at the outset –
i.e., either respect the law (and lose your wife) or disobey the law (but
save your wife). So, as some of my own students have suggested: if the
pharmacist is a friend who knows and trusts Heinz and his wife, why
couldn’t he arrange for Heinz to pay for the needed drug over time,
rather than insisting on an all-or-nothing payment?
For Gilligan, women’s ethical development could thus be characterized
as an ethics of care and responsibility for both others and oneself (the
latter, at least, in the post-conventional stage), in contrast with (but
not in opposition to) the ethics of principles, rules, and justice that
characterized the ethical focus of many (but by no means all) men.
Finally, Gilligan emphasized that these two patterns of ethical
development, while clearly different, are not mutually exclusive.
Rather, both patterns are essential – and, ideally, conjoined in a
synthesis that holds both together. (For a more careful discussion, see
Tong and Williams 2018.)
Of course, there are any number of controversial and highly contested
assumptions and claims at work here, as the subsequent development
and debates regarding feminist ethics bring to the forefront. For
example, does Gilligan’s schema run the risk of essentialism – of
assuming or arguing that there is something (an “essence”) about
being biologically female that strongly directs (or simply determines)
that all women must follow the lines of ethical development it
articulates? Feminists insist that such essentialism is disastrous as it
reinforces gender stereotypes used throughout the history of
patriarchy to justify women’s subordination to men. And Gilligan
would deny that she is making such an essentialist assumption.
Despite these and related difficulties, however, Gilligan’s work
inaugurated important new developments in ethical theory, beginning
with greater respect for the positive role of emotions – specifically,
care – as developed more extensively by Sara Ruddick (1989) explicitly
in terms of an ethics of care. Another foundational figure, Nel
Noddings, also highlighted the relational aspects of care ethics: “It is
my committed practice of caring for others that sustains and enriches
this ethical self” (Noddings 1984, 14, in Vallor 2016b, 225).
To be sure, one does not have to be a feminist to take up an ethics of
care: early on in the modern West, David Hume famously argued that
ethical reflection is fully reducible to emotions; but, for some of us,
this goes too far, especially as it runs the risk of thereby reducing all
ethical claims to purely relative ones.
Despite this risk, as we will see again in the context of virtue ethics
(section 5, below), there is a growing recognition from a variety of
sources – feminist ethics, virtue ethics, neurobiology, and comparative
philosophy more broadly – of the central roles played by emotions in
ethical decision-making. For example, Joshua D. Greene (2014) notes
that “Patients with frontotemporal dementia, which typically involves
emotional blunting, are about three times as likely as control subjects
to give consequentialist responses” (Mendez, Anderson, and Shapira
2005, cited in Greene 2014, 701f.). By contrast, “People who are more
empathetic, or induced to be more empathetic, give more
deontological responses” (Conway and Gawronski 2013, cited in
Greene 2014, 703).
These turns toward the integral role played by emotions in our
decision-making process are further accompanied by feminist
attention to what our embodiment means for our thinking/feeling
about the world – how we know and navigate it, starting within our
relationships. To begin with, embodiment entails a non-dual
understanding of the relationship between self and body, as we saw
explored especially by Sara Ruddick in her account of complete sex
(1975, 89; see chapter 5, pp. 186–8). In addition to emotions alongside
reason, embodiment further highlights the role of tacit knowledge,
knowledge that is learned through experience and encoded in our
bodies. By definition, tacit knowledge deeply resists our efforts to
make it explicit and articulate – say, for the purposes of invoking it in
our ethical reflections. But its central role is apparent in our phrases
“my gut feeling” (equivalent to the Danish and Norwegian
magefølelse) and “following my heart.” (As with the role of emotions,
contemporary neurobiology and cognitive science confirm and
helpfully refine these sensibilities – perhaps most strikingly with
contemporary theories of “the embodied mind” and “embodied
cognition” (Wilson and Foglia 2017).)
These non-dual understandings of body–mind (LeibSubjekt) and
thinking/feeling are further important as they resonate with: (a)
premodern Western understandings of our ethical life as involving
both thought and feeling (e.g., in the Socratic and Aristotelian
conception of phronēsis, a practical ethical judgment that is felt as
much as thought); and (b) non-Western understandings, for example
the Confucian view of the human being as incorporating xin, what
Ames and Rosemont translate as “heart-and-mind,” to make the point
that “there are no altogether disembodied thoughts for Confucius, nor
any raw feelings altogether lacking (what in English would be called)
‘cognitive content’” (1998, 56). The role of emotions in ethics is thus a
shared understanding across a literally global scale; as feminist ethics
brings this role to the foreground, it thereby points toward what may
be a “bridge” concept, a shared understanding between both Western
and Eastern views that will play an important role in any global digital
media ethics.
Moreover, in emphasizing the importance of webs of interdependent
relationships, in contrast with a prevailing emphasis on individual
rights, feminist ethics thereby supported and developed alongside
(then) new forms of environmental or ecological ethics. Briefly, such
ethics extends the modern Western focus on the rational individual
human being as the primary moral agent who deserves moral status,
so as to argue that non-human entities, including not only living
beings but the larger ecological systems they constitute in relationship
with the natural order, also deserve and require moral status and
respect in our ethical reflections.
In these ways, feminist ethics helps us move to a more inclusive and
comprehensive account of how we may come to grips with the ethical
challenges we face.4
Applications to digital media ethics
Arguably, an ethics of care is already at work in a number of choices
and behaviors associated with digital media. As we’ve seen, for those
who enjoy using digital media to copy and distribute songs, videos,
etc., that they enjoy, “sharing is caring.” That is, it would appear that a
primary motive in such sharing is our pleasure in giving to friends and
loved ones the chance to enjoy the same music and videos that we have
enjoyed. In particular, insofar as a sense of self as a relational
autonomy likewise entails an emphasis on care and caring
relationships (Christman 2003, 143), such care is consistent with the
inclusive sense of property rights we saw at work in such sharing
(chapter 3, pp. 114–15).
More specifically, care ethics is explicitly invoked in the design of so-
called carebots – that is, robots intended to take over various chores of
health care (van Wynsberghe 2016).
At the same time, it is important to keep in mind a significant limitation of an ethics of care. Insofar as care ethics stresses the role of our
emotional bonds with one another, it thereby runs the risk of
restricting our ethical focus too narrowly – that is, upon a relatively
small circle of family, friends, and loved ones. Taken to its extreme, an
ethics of care could thus justify our ignoring whole populations around
the globe because, simply, we do not experience a relationship of care
with such populations. But in a world ever more interwoven via digital
media – unless these media help us learn how to care for others
beyond our immediate circles – the ethics of care runs the risk of an
increasingly inappropriate provincialism.
REFLECTION/DISCUSSION/WRITING QUESTIONS: FEMINIST ETHICS AND
DIGITAL MEDIA
In my view, one of the most important contributions of feminist ethics
and an ethics of care is not only that they require us to acknowledge
the significance of emotions, including feelings of care, but also that
they help us learn to think beyond more dualistic, either/or
approaches that have been emphasized in modern Western reflection
and teaching about ethics. An especially prominent example here is
just the notion of the self as a relational autonomy – that is, a sense of
self that overcomes the apparent polarity between individuality and
relationality by conjoining elements of both. By moving toward a
“both/and” logic (or logic of complementarity), in particular, we are
sometimes able to see a third alternative or possibility (or more) –
overlooked by more dualistic ways of thinking – that thereby may help
us resolve what otherwise seem to be intractable dilemmas of the sort
faced by Heinz.
These (for the modern West, new) ways of thinking, moreover, are
valuable not only as they help sustain a much needed environmental
ethics but, further, as such relational thinking may closely resonate
with: (i) contemporary non-Western ethical frameworks (explored
more fully below); and (ii) especially the networked or distributed
character of ICTs and other digital media linked together through the
internet and the Web.
(A) Given what you are able to understand about these two different
logics – a logic of dualism as based on the exclusive either/or and a
logic of complementarity or “both/and” (discussed in chapter 1, pp.
26–8) – as you observe the larger culture around you, which of these
two logics appears to be at work more predominantly than the other?
Be sure to provide an example or two to help illustrate your point.
(B) Identify a central issue in digital media ethics that you have
already analyzed and responded to with some care in the course of
your working through this volume. Review your response: do you
seem to rely on one of these logics more than the other in your
analyses and resolution(s) of this issue? Be sure to explain carefully
how the logic you identify is apparent in your analysis/resolution.
(C) After reviewing your analyses and resolution(s), insofar as they
seem to rest on using one logic more than another, would they be any
different in any significant ways if you were to attempt to make them
using the other logic instead? If so, how? Be sure to explain carefully
how this is so.
[See also the Reflection/Discussion/Writing Question following the
next section, as it takes up both care ethics and virtue ethics.]
5. Virtue ethics
Virtue ethics is both ancient in the West (associated with especially
Socrates and Aristotle) and global, in the sense that we find versions of
virtue ethics in diverse philosophical and religious traditions around
the world (including, as we will see in the next section, in Confucian
and Buddhist thought). In this way, virtue ethics is an important
common ground for ethicists from diverse traditions – one that has
clear potential to serve as a significant component of a shared global
ethics. Indeed, virtue ethics has enjoyed something of a renaissance in
recent decades among Western philosophers for a number of
important reasons – including precisely its potential for providing a
common ethical ground for global ethics. In particular, as we explored
in chapter 4, virtue ethics emphasizes the central importance of our
relationships with others, beginning with friendship: it is hence an
especially appropriate framework in an age of social media, as (a) our
sense of selfhood appears to emphasize relationality more and more,
in part as (b) our relationships – beginning with our “friends” on
social networking sites – are precisely what such venues are designed
to facilitate and foster. More comprehensively, Shannon Vallor
(2016b) has extensively plumbed these diverse global traditions to
develop a list of 12 “techno-moral virtues” that are specifically tuned to
the ethical challenges of a technological era. Her list includes care –
along with: honesty, self-control, humility, justice, courage, empathy,
civility, flexibility, perspective, magnanimity, and “technomoral
wisdom,” i.e., the keystone virtue of phronēsis (ibid., 118–55).
Virtue ethics begins with the sensibility that what we ought to do as
human beings is, first of all, become excellent human beings.
Becoming an excellent human being, more precisely, means to develop
and fulfil our most important capacities as human beings. Clearly, as
individuals, we may have a distinctive set of potential abilities, such as
athletic or musical abilities. But, for Socrates and Aristotle, our most
important abilities as human beings as such, not simply as individuals,
are our capacities to reason – and this in two ways. What Aristotle
(and later Kant) identified as the “theoretical” function of reason
centers on what we now think of as a scientific understanding of the
laws and principles that guide the workings of the physical world. For
the ancient and medieval thinkers in the West, this capacity to
understand reality was important on a number of grounds. In
particular, by understanding reality properly, we as human beings can
then “attune” ourselves to that reality – that is, we can know better
both what to expect of it and how to behave within and in relationship
with it, in order to achieve what the Greeks called eudaimonia – often
translated as “happiness” but better understood as a kind of
fundamental sense of well-being and contentment.
But, if our goal as human beings is to achieve such contentment or
eudaimonia, then it is equally important that we develop what
Aristotle (and, subsequently, Kant) identified as practical reason. Such
practical reason involves first of all our ability – given our best
knowledge of reality and thus of our possible choices and actions – to
make the sorts of analyses and ethical judgments required for us to do
“the right thing,” both for ourselves as individuals (the ethical for
Aristotle) and for our larger communities (for Aristotle, the political).
As we have seen, these sorts of ethical decision-making further require
what Socrates and Aristotle term phronēsis – a practical judgment
that is able to discern the right choice (or, sometimes, choices) among
the possibilities before us.
This capacity for judgment, we can notice, is one that is capable of
learning from its mistakes. So Socrates (as related by Plato) uses the
ship’s pilot and the physician in the Republic as primary exemplars of
people who exercise such judgment, and notes:
a first-rate pilot [cybernetes] or physician for example, feels
[diaisthanetai] the difference between the impossibilities and
possibilities in his art and attempts the one and lets the others go; and
then, too, if he does happen to trip, he is equal to correcting his error.
(Republic, 360e–361a [Plato 1991]; emphasis added; cf. Republic I, 332c–e; VI, 489c; X, 618b–619a)
And learning from mistakes means, as Aristotle emphasized, that our
developing these capacities of ethical judgment and analysis, and of
reason more broadly, is an ongoing task: just as the athlete or
physician must constantly practice if she or he is to maintain, much less improve, her or his abilities, so we as human beings must likewise
cultivate in a conscious and ongoing way our rational abilities,
including our use of phronēsis.
(Many readers will further recognize the term cybernetes as
reminiscent of “cybernetics” – namely, the science of self-correcting
information systems founded by Norbert Wiener. “Cybernetics” appears at the same time in the title of the first computer ethics book, as we saw
above [Wiener 1950]. This means precisely that virtue ethics is “baked
into” the very beginnings of information and computing ethics, as we
will further explore below.)
To put it somewhat differently: being a human being is not something
simply given or taken for granted. Rather, becoming a human being –
that is, a being capable of (among other things) making the ethical and
political judgments required for living a good ("happy") life in a
community marked by harmony and well-being – is an ongoing task.
Finally, it is important to emphasize that, while developing our other
capacities – e.g., as athletes, musicians, lovers, friends, parents, game-
players, etc. – certainly matters, for Socrates and Aristotle it is very
clear that nothing is more important than the task of cultivating and
practicing excellence as a human being – meaning, as a human being
engaged with making ethical and political judgments and choices. In
particular, if we subordinate our cultivation of excellence as ethical
and political beings to any other activity – e.g., the pursuit of wealth or
power – we thereby put our capacity for reason and ethical judgment
at risk. Indeed, Socrates and Aristotle argue that, if we allow our
interests in wealth and power to persuade us to judge and act against
our reason and better judgment, we thereby harm these capacities
(just as we would harm a race-horse, to use Socrates’ analogy, by using
it as a plow-horse instead). But, if we harm and hence diminish these
capacities, we thereby undermine the capacities most central to our
discerning what is genuinely good, pursuing it, and thereby achieving
eudaimonia or well-being.
This is not to say, as some later moralists argued, that we can achieve
eudaimonia only by abstaining from the pursuit of, say, wealth and
power. Rather, Socrates and Aristotle are optimistic that both
eudaimonia, as resulting from pursuing our excellence as ethical and
political beings, and (at least a moderate amount of) wealth and power
can be had together. (Indeed, for Aristotle, a moderate amount of
wealth and power is a necessary condition of cultivating theoretical
and practical reason, and thereby of achieving eudaimonia.) But the
constant danger is to let our interests in wealth and power overshadow
our pursuit of excellence as ethical and political beings – and thereby,
to paraphrase Jesus four centuries later, to gain the whole world but
lose our souls.
So Socrates (again as related by Plato) says, in The Apology:
It is God’s bidding, you must understand that; and I myself believe no
greater blessing has ever come to you or to your city than this service
of mine to God. I have gone about doing one thing and one thing only,
– exhorting all of you, young and old, not to care for your bodies or for
money above or beyond your souls and their welfare, telling you that
virtue does not come from wealth, but wealth from virtue, even as all
other goods, public or private, that man can need.
(The Apology, 29e–30b [Plato 1892]; emphasis added)
In this way, Socrates argues for the absolute priority of human
excellence over all other interests if we are to achieve eudaimonia or
well-being, but insists thereby that our pursuit of excellence will also
lead to the other human goods that we desire and need.
While deontology and consequentialism dominated much of the
ethical discussion among Western philosophers in the twentieth
century, within the last four decades virtue ethics has enjoyed a
remarkable renaissance. Rosalind Hursthouse nicely summarizes why:
for all of their strengths, neither deontology nor consequentialism
seems to address a number of topics required for a complete moral
philosophy, including “moral wisdom or discernment, friendship and
family relationships, a deep concept of happiness, the role of the
emotions in our moral life, and the questions of what sort of person I
should be” (1999, 3).
All of these elements are important – beginning with moral wisdom or
discernment, i.e., phronēsis. Moreover, like feminist ethics (above,
pp. 251–5), virtue ethics restores our ethical attention to the
importance of emotions. As we saw in Confucian thought, in contrast
with the Cartesian mind–body split, Ames and Rosemont (1998, 56)
translate xin as "heart-and-mind" in order to emphasize that thought
and feeling always accompany each other. In bringing the emotions to
the foreground of our ethical lives, virtue ethics, like feminist ethics,
thus points to a post-Cartesian view – one that brings Western ethics
closer to at least some of its non-Western counterparts. Doing so may
be an essential step in the development of a more global digital media
ethics – that is, one that "works" in both Western and non-Western
cultures and traditions.
Moreover, virtue ethics, as including a focus on the development of
moral judgment (phronēsis), thereby highlights a critical element of
learning how to be human – both alone and with others: most
importantly, as it is only through developing and exercising such
judgment that we can claim to be (relationally) autonomous and (self-
)responsible human beings. Without such judgment, simply, we are
likely only to follow the dictates of others. In these directions, virtue
ethics is deeply interwoven especially with Western traditions of
conscientious objection. The figure of Antigone, in Sophocles' play of
the same name, is foundational here. Her brother Polyneices fought on
the losing side of the Theban civil war: the victorious King Creon
declares that his body (along with those of all others who fought
against the king’s forces) must remain unburied – a profound
dishonor as well as a stark violation of religious dictates and customs.
Antigone is caught squarely between a superior order (as later theory
would put it) and what her senses of religious propriety and familial
obligation to her brother require. As Socrates – and many others –
would subsequently, Antigone ultimately chooses to disobey Creon’s
order, even though it means her own death. Much of the language in
the play circles around phronēsis and the quest for what moral
wisdom would discern in the face of such a dilemma. As Martha
Nussbaum has pointed out, Antigone thus dramatizes a central feature
of phronēsis – "the idea that the value of certain constituents of the good
human life is inseparable from the risk of opposition, therefore of
conflict" (Nussbaum [1986] 2001, 353, in Wall 2003, 323). More
broadly, this capacity of phronetic judgment is central to modern
understandings of law in constitutional democracies – namely, their
hallmarks of “Self-rule, disobedience and contestability” (Hildebrandt
2015, 10).
Finally, we have seen that some modern Western ethical frameworks
contrast starkly with their non-Western counterparts. Aristotle’s virtue
ethics, however, resonates with similar emphases on becoming an
excellent or exemplary human being as a focus of one’s life that are
found in a number of philosophical and religious traditions around the
world, including Buddhism and Confucian thought (cf. Vallor 2016b,
41). We will explore this more fully below.
Virtue ethics: sample applications to digital media
An initial way of applying a virtue ethics to digital media, as noted in
the previous chapter, is to ask the question: what sort of person do I
want/need to become to be content – not simply in the immediate
present, but across the course of my entire (I hope, long) life? Along
these lines: what sorts of habits should I cultivate in my behaviors that
will lead to fostering my reason (both theoretical and practical) and
thereby lead to greater harmony in myself and with others, including
the larger natural (and, for religious folk, supernatural) orders?
As part of its resurgence in the contemporary West, virtue ethics has
found wide application, beginning with such increasingly urgent topics
as designing ethics for robots (e.g., Coleman 2001; see discussion of
carebots, below). Most broadly, Julie Cohen (2012) draws on the work
of virtue ethicist Martha Nussbaum and communitarian political
philosopher Amartya Sen vis-à-vis a range of issues facing
contemporary users of digital media, including copyright (ch. 3) and
privacy (chs. 5, 6). Most remarkably, virtue ethics, coupled with
deontology, has become central to ICT design. Examples here
include James Hughes’s Buddhist approach to “Compassionate AI and
Selfless Robots” (2012) and Sarah Spiekermann’s foundational
textbook for “eudaimonic” ICT design (2016). More specifically, within
the European Union, central philosophical and policy-related
documents take up the language of flourishing and well-being
(eudaimonia). So Floridi et al. (2018) appeal to human dignity (as
resting on explicitly Kantian notions of autonomy) and flourishing as
the key ethical pillars of their ethical roadmap for moving toward “a
Good AI Society” (2–3). In particular, “self-realisation” is a primary
capacity to be preserved and enhanced by AI: their definition is
instantly recognizable from virtue ethics – namely, “the ability for
people to flourish in terms of their own characteristics, interests,
potential abilities or skills, aspirations, and life projects” (ibid., 4; cf.
Burgess et al. 2018).
While the authors do not make the linkage explicit, this focus on self-
realization and virtue ethics more broadly is inaugurated, as we noted
above, in Norbert Wiener’s foundational text for computer ethics
([1950] 1954: above, pp. 262–3). It is hence especially fitting that the
Institute of Electrical and Electronics Engineers' (IEEE) work to develop
global standards for new autonomous and intelligent systems focuses on
“ethically aligned design”: the ethics in play here are precisely
deontology and virtue ethics – beginning with Aristotle’s conception of
eudaimonia (IEEE 2019, 2).
In this volume, I have applied virtue ethics especially to the topic of
friendship online (chapter 4) and to pornography* and sex and
violence vis-à-vis robots and computer games in chapter 5.
REFLECTION/DISCUSSION/WRITING QUESTIONS: THE VIRTUES OF CARING,
COURAGE, AND HONESTY VIS-À-VIS CAREBOTS
As noted above, Shannon Vallor has carefully developed a set of
“Technomoral virtues” that she argues are central to good lives of
flourishing in an era deeply shaped by rapidly evolving technologies.
These are: honesty, self-control, humility, justice, courage, empathy,
care, civility, flexibility, perspective, magnanimity, and technomoral
wisdom – the last of which incorporates phronēsis (Vallor 2016b,
120).
One of Vallor’s primary explorations and applications of these virtues
takes up care and the practices (virtues must always be practiced) of
care-giving. The specific example is of caring for elderly parents vis-à-
vis “offloading” the chores and obligations of such caring to carebots.
(For examples of such carebots, see Vallor 2016b, 219.) Caring further
requires the virtue of courage:
Caring requires courage because care will likely bring precisely those
pains and losses the carer fears most – grief, longing, anger,
exhaustion. But when these pains are incorporated into lives sustained
by loving and reciprocal relations of selfless service and empathic
concern, our character is open to being shaped not only by fear and
anxiety, but also by gratitude, love, hope, trust, humor, compassion,
and mercy.
(Ibid., 226)
Lastly, caring and courage are required for confronting our existential
situation with open eyes:
Caring practices also foster fuller and more honest moral perspectives
on the meaning and value of life itself, perspectives that acknowledge
the finitude and fragility of our existence rather than hide it.
(Ibid.)
That is, for Vallor, the large project of developing such virtues is driven
not only by their particular fit and usefulness in a technology-driven
world: still more fundamentally, she invokes the philosopher José
Ortega y Gasset, who foregrounds the central existentialist project of
acknowledging our mortality as an essential step toward discerning
and creating meaning in our lives. Ortega y Gasset is particularly
fitting here as he foregrounds the role of technology in the “project” of
becoming ourselves: “the mission of technology consists in releasing
man [sic] for the task of being himself” (Ortega y Gasset 2002, 118, in
Vallor 2016b, 247). This understanding of technology as emancipatory
– as freeing us to become more fully our best selves – is a theme
announced by Norbert Wiener at the beginning of information and
computing ethics ([1950] 1954, 106). At the same time, Vallor thus
stands among a growing number of contemporary scholars and
researchers who are rediscovering and/or applying existentialist
philosophy in new ways, precisely with a focus on digital media
(Lagerkvist 2016; Ess 2018a).
As we saw in chapter 5 in connection with sexbots (pp. 194–5), Vallor
argues that a primary ethical issue evoked by contemporary
technologies is the problem of “deskilling.” Again, caring is a virtue or
a skill: “It is difficult to know how to care for people well –
emotionally, physically, financially, and otherwise, in the right ways, at
the right times, and for the right persons” (Vallor 2016b, 221). As with
(more or less) all other technologies, carebots are designed to make
our lives easier – in this case, to help “offload” or transfer the less
pleasant and more difficult dimensions of caring, for example, for the
elderly. While much of this would seem to be most welcome – first of
all, for the primary care-givers – Vallor points out that such offloading
thereby reduces our opportunities and requirements to cultivate and
improve on our capacities to care. As we saw in the example of
sexbots, then, the risk of relying more and more on technologies that
demand less and less of us (cf. Turkle 2011) is that we ourselves
become less capable of exercising the virtues requisite for good lives of
flourishing – including caring and loving themselves, as well as courage,
patience, perseverance, and empathy, all essential to human
communication, deep friendship, long-term intimate relationships,
and so on. To state this more bluntly: such ethical deskilling, in the
worst-case scenario, renders us more and more like the robots and
machines we interact with (ibid.; Hildebrandt 2015, 71f.).
A. Review the list of virtues affiliated with care: along with care itself,
which of these virtues do you think/feel are indeed central to a good
life of flourishing as you best understand it?
B. Identify either a real-world or imagined example of a carebot – or,
perhaps, choose examples of “virtual assistants,” such as Apple’s Siri,
Amazon’s Alexa, Google Voice, etc. – and/or the holographic robot
now available from Gatebox (www.youtube.com/watch?
v=nkcKaNqfykg).
As you imagine and/or actually interact with one or more of these
devices –
(i) which of the important virtues you have listed come into play and
thus are practiced and perhaps improved upon?
(ii) which of these virtues are not reinforced – and/or may be
countered by other forms of practice that interaction with such
assistants requires?
(iii) Given your responses to the above – is Vallor (along with Turkle,
Hildebrandt, and now many others in the “tech world”) onto
something with the concern about ethical deskilling? Why – and/or
why not?
6. Confucian ethics
Confucian thought begins with a very different understanding of the
human being than that held in modern Western theories.
Modern Western thought tends strongly to assume that human beings
are “atomic” individuals – that is, that the human being as an
individual is the most basic element or component of society, one that
begins and can remain in complete solitude from others. (This
atomism is traceable to the English philosopher Thomas Hobbes and
the French philosopher René Descartes, but that story is too long to
develop here.) Henry Rosemont (2006) has characterized this as the
“peach-pit” view of human beings. That is, a peach presents us with a
surface – one that grows, changes, and finally dies over time. But
underneath these surface changes is the peach-pit – a stony, hard core
that remains (relatively) unchanged over time. The peach-pit is thus
closely analogous to traditional Christian and Islamic conceptions of
the soul and modern conceptions of the atomistic self. That is,
underlying a surface body that grows, changes, and ultimately dies
with time there is thought to be the “real” self, the identity that
remains the same through time, “underneath” the outward and surface
appearances of the mortal body. To be sure, this conception of the self
resolves some important philosophical and ethical problems
concerning identity – for example, if there is no substantive, real self
underneath the constant changes of a body, then who or what is
responsible for that body’s actions? That is, if the body associated with
“you” committed a terrible crime five years ago, is it reasonable to say
something like “that wasn’t really me – I [meaning, my body] have
changed and can no longer be held responsible for what I [my body]
did five years ago?” Generally, in the modern West, we do think that
individuals remain responsible for their acts through time; thinking
this way makes sense on the assumption of a “peach-pit” or atomistic
self/identity that remains more or less the same over the life-course.
Such a conception of the self, however, can be understood as the result
of a long development in Western societies. As we have seen, Foucault
(1987, 1988), as well as Medium Theory, affiliates this conception with
writing as a "technology of the self." This conception is amplified and
“democratized” – that is, made accessible to ever-expanding numbers
of people – with the development of the printing press as the (then)
new media technology that helped fuel the Protestant Reformation
and the Protestant emphasis on the individual soul and salvation.
These conceptions are then philosophically refined and secularized in
figures such as Descartes. Making real such a conception of the self
further appears to depend on the wealth generated through
industrialization. (As we have seen in the discussion of privacy, such a
conception of the self, while initially alien to such Eastern societies as
China, Japan, and Thailand, is becoming increasingly apparent there –
in part, as these societies develop the wealth that makes individual
privacy realizable, e.g., through the luxury of private rooms for
children, etc.)
By contrast, in classical Confucian thought (and elsewhere, as we have
seen), human beings are understood first of all as relational beings:
we are who we are always and only as we are taken up in specific
relationships with others. For me, this means that I am always – and
only – someone’s son, brother, spouse, father, uncle, friend, employee,
boss, beneficiary, etc.; and how I am – i.e., my choices, attitudes,
behaviors, etc. – is always shaped in specific ways by each specific
relationship. And so, how I am in relationship with my parents is
different from how I am in relationship with my spouse, my siblings,
my own children, my students, etc. To continue with Henry
Rosemont’s (2006) organic metaphors, in classical Chinese thought,
human beings are like onions, not peaches: each of our distinctive
relationships with others – including the larger social and political
communities and, finally, the natural order at large (Tian) –
constitutes one of the multiple layers that in turn make up who we are
as human beings. In contrast with the peach-pit model, however, if we
remove the layers of relationship from the onion, there’s nothing left.
In ways closely analogous to the virtue ethics in the West, this
understanding of the human being as a relational being means that
ethics is primarily about becoming a (more) complete human being –
first of all, by cultivating the behaviors and attitudes required for
establishing harmony both among members of the human community
(beginning with the family) and with the larger order (Tian) as such.
In classical Confucian thought, this begins with learning and
practicing filial piety – respect and care for one's parents – and ritual
propriety. But the ultimate aim is to become an exemplary person
(junzi) – someone who has cultivated and practiced appropriate
attention to and care for others to such a degree that this exemplary
behavior is who that person is. So Confucius describes the exemplary
person as follows:
The Master said, “Having a sense of appropriate conduct (yi) as one’s
basic disposition (zhi), developing it in observing ritual propriety (li),
expressing it with modesty, and consummating it in making good on
one’s word (xin): this then is an exemplary person (junzi).”
(15.18; Ames and Rosemont 1998, 188)
The exemplary person, in short, is one who has shaped his or her basic
character or disposition through the practice of appropriate conduct
and ritual propriety. The primary markers of such a character are
modesty and integrity.
Much as Socrates and Aristotle emphasized achieving human
excellence through cultivating and practicing the right habits
throughout one’s lifetime, Confucian ethics emphasizes that the
project of becoming an exemplary person (always in relationship with
others) is a life-long project. As one of the most famous of the Analects
has it:
At fifteen my heart-and-mind was set on learning.
At thirty my character had been formed.
At forty I had no more perplexities.
At fifty I realized the propensities of tian (T’ian-ming).
At sixty I was at ease with whatever I heard.
At seventy I could give my heart-and-mind free rein without
overstepping the boundaries.
(2:4; Ames and Rosemont 1998, 76f.)
This is to say, for Confucius, cultivating the virtues or excellences,
beginning with filial piety, leads to a sense of harmony or resonant
relationship both with other human beings and with the larger order
of things – a sort of freedom and contentment that can be achieved in
no other way.
And because, finally, it is believed that such ultimate freedom and
contentment can be achieved only through the cultivation of
excellence as a human being, we are always mistaken when we believe
we will achieve happiness through other means, such as wealth
and honor. So, just as Socrates and Aristotle later emphasized the
importance of putting such human excellence first, in the same way
Confucius insists that such excellence or virtue – for Confucius,
following the proper dao or path – must always come first:
The Master said, “Wealth and honor are what people want, but if they
are the consequence of deviating from the way (dao), I would have no
part of them. Poverty and disgrace are what people deplore, but if they
are the consequence of staying on the way, I would not avoid them.”
(4.5; Ames and Rosemont 1998, 90; see also 4.11)
Confucian ethics and digital media: sample applications
We have seen that Confucian ethics is at the center of a major conflict
between Western and Eastern attitudes and practices regarding
copyright (chapter 3). As a reminder, within a Confucian framework,
an exemplary person, as benevolent toward others, would want to
share with others who likewise seek such excellence the important
insights that have allowed him or her to become such a person. Hence,
the text he or she produces to record such insights is seen not
primarily as a matter of personal property, but rather as a gift to be
given to others – one that, indeed, may work as a kind of essential
toolkit for the larger life-project of becoming an exemplary person.
The appropriate response of those benefiting from this gift might
include copying it and giving it to others – first of all, as a mark of
respect and gratitude for the work of the exemplary person. In this
light, copying and distributing a text is not principally a matter of
violating one’s personal property as articulated in terms of copyright
limitations; it is rather a matter of showing respect and gratitude for
the gift of a benevolent master.
More recently, Pak-hang Wong (2013) has applied Confucian thought
to a range of ethical issues affiliated with Web 2.0 technologies and
venues, including Social Networking Sites (SNSs). His “Confucian
Social Media: An Oxymoron?” addresses conflicts between Confucian
values and those ostensibly embedded in the design of the
contemporary internet and Web. Endorsing a Confucian virtue ethics
approach, Wong provides three recommendations for practices (or
virtues), beginning with “A skilful engagement with social media” that
includes careful use of privacy settings and techniques such as “social
steganography” (as in boyd 2010) for sustaining a strong sense of who
one’s audience is (Wong 2013, 293). His further recommendations – a
“reinvigoration of rites in the online world” and “prioritisation of the
offline world” – likewise seek to sustain Confucian virtues in our use of
social media (ibid., 293f.).
In this direction, Wong’s point that “social media can only be viewed
as a supplement of the offline world” (ibid., 294) is a comparatively
early argument in the direction of a postdigital era – and in keeping
with the now very extensive chorus of voices arguing precisely for a
ratcheting down of our screen and online time in favor of more real-
world engagements and relationships. Finally, Wong further adds two
(re)design recommendations, namely “Designing contextual
awareness into social media” and “(re)introduction of role
responsibility into social media” (ibid., 294f.)
Subsequently, the literature on Confucian approaches to contemporary
technologies has developed considerably. For example, Wong's
analyses are somewhat countered by Tom Wang (2016). Specifically,
Wang argues that a (re)design of SNSs as guided by tian xia, “a basic
structuring principle of Confucian philosophy,” would thereby bring
SNS into alignment with Confucian thought – that is, as they would
then offer and foster a “moral space in which all inhabitants of the
world are thought to participate as members – as different individuals
– who are moral equals” (2016, 240).5
To date, the most extensive application of Confucian thought to life in
a technological era is Shannon Vallor’s review of Confucian tradition
and virtues (2016b, 37–9) and her synthesis of these within the more
global technomoral virtues: as we have seen, these are then applied to
an array of digital media, including SNSs and carebots.
7. African perspectives
Colleagues engaged in the global dialogues on information and
computing ethics represent a number of important linguistic/cultural
domains – certainly Western perspectives (the US, the UK, Australia,
Northern and Southern Europe, including Scandinavia) as well as
Asian perspectives (including China, Japan, Thailand, India). Early
on, there was comparatively less representation and participation (at
least in the English-language literature) from Latin American
countries and Africa. But, most fortunately, this has begun to change.
Uruguayan-born Rafael Capurro, for example, has been a pivotal
figure in both Spanish- and English-language information and
computing ethics (e.g., Capurro 2012).
At the same time, African thinkers have become more engaged in these
global dialogues – sparked in part by the first African Information
Ethics conference, held in Pretoria, South Africa, in February 2007. In
his opening address to the conference, Rafael Capurro (2007, 6)
emphasized the importance of ubuntu as an indigenous philosophical
tradition and framework for developing an information ethics
appropriate to the African context. As we saw in an introductory way
in the discussion of Open Source and FLOSS (chapter 3), ubuntu (as
inspiring the popular Ubuntu distribution of Linux) emphasizes that
we are human beings in and through our relationships with other
human beings: “to be human is to affirm one’s humanity by
recognizing the humanity of others and, on that basis, establish
humane respectful relations with them” (Ramose 2002, 644; cited in
Capurro 2007, 6; cf. Capurro 2012, 120f.). While not all peoples and
traditions in Africa recognize the term ubuntu, this notion of being
human as involving an intrinsic interrelationship with and
interdependence upon others is widely characteristic of African
thought. So Barbara Paterson has observed that, “In African
philosophy, a person is defined through his or her relationships with
other persons, not through an isolated quality such as rationality
[Menkiti 1979; Shutte 1993]” (Paterson 2007, 157; emphasis added).
And just as Confucian thought, in beginning with the person as a
relational being, thereby stresses interaction with the larger
community (both human and natural), so, Paterson continues, in
African thought, in community, “Through being affirmed by others
and through the desire to help and support others, the individual
grows, personhood is developed, and personal freedom comes into
being” (ibid., 158). This means that personhood is not a given, but
rather an ongoing project: “African thought sees a person as a being
under construction whose character changes as the relations to other
persons change. To grow older means to become more of a person
and more worthy of respect” (ibid.; emphasis added). Again, given this
concept of the individual, engagement with the community is
paramount: “The individual belongs to the group and is linked to
members of the group through interaction; conversation and dialogue
are both purpose and activity of the community” (ibid.).
What kind of person we are to become is articulated by no less a moral
authority than Archbishop Desmond Tutu:
When we want to give high praise to someone we say, Yu, u nobuntu;
hey, so-and-so has Ubuntu. Then you are generous, you are
hospitable, and you are friendly and caring and compassionate. You
share what you have. It is to say, my humanity is caught up, is
inextricably bound up, in yours. We belong in a bundle of life. We say
a person is a person through other persons. It is not I think therefore I
am. It says rather: I am human because I belong, I participate, and I
share. A person with Ubuntu is open and available to others, affirming
of others, does not feel threatened that others are able and good, for he
or she has a proper self-assurance that comes from knowing that he or
she belongs in a greater whole and is diminished when others are
humiliated or diminished, when others are tortured or oppressed, or
treated as if they were less than who they are.
(www.tutufoundationusa.org/desmond-tutu-peace-foundation)
In other terms, ubuntu involves the project of acquiring and practicing
certain virtues, including a strong sense of interconnectedness with
one’s larger community and the states and fates of others in that
community – in part as this contributes to the virtue of “proper self-
assurance.”
For all their rich distinctiveness, African traditions in these ways
closely parallel both Confucian thought and Aristotelian virtue ethics,
beginning with their shared emphasis on the individual human being
as first of all engaged with the larger human (and natural)
communities, for the sake of both individual and community harmony
and flourishing. Hence, Confucian and Aristotelian approaches may
provide helpful analogues for African thinkers as they explore and
develop their own forms of information and computing ethics. But all
of this is still emerging: it will be very interesting to see where African
philosophers and users of digital media take us – both for their own
sake, and for the sake of the larger global dialogue regarding ICE and
digital ethics more generally.
Applications
For a host of reasons – beginning with the effects and consequences of
centuries of Western colonialism – a good deal of recent work on
information ethics in African contexts focuses on urgent matters of
development, including ICT4D (ICT for development), justice, and
digital literacy.
Coetzee Bester and Beverley Malan (2016) Information Ethics in
Africa: Curriculum Design and Implementation, Innovation: Journal
of Appropriate Librarianship and Information Work in Southern
Africa (52): 19–35.
A subsequent development of the first African Information Ethics
conference in 2007 was the establishment of the Africa Centre of
Excellence for Information Ethics (ACEIE) in 2012. One of the aims of
the ACEIE is to develop a Curriculum Framework for information
ethics. This article describes components of the Framework and its
possible contributions to information ethics as well as to “the
development of Africa as a globally competitive information and
knowledge society.”
Liezel Cilliers (2017) Evaluation of Information Ethical Issues among
Undergraduate Students: An Exploratory Study, South African
Journal of Information Management, 19(1): 1–5.
Perhaps not surprisingly, plagiarism is a common problem among
young adults in higher education – including in Africa. The author
recommends that “information ethics must be included in the
undergraduate curriculum in order to prepare students to deal with
these ethical problems” (2017, 1).
Koliwe Majama (2018) Exploring Africa’s Digitalisation Agenda in the
Context of Promoting Civil Liberties. Keynote address to
“Digitalisation in Africa: Interdisciplinary Perspectives on Technology,
Development, and Justice,” International Center for Ethics in the
Sciences and Humanities (IZEW), Tübingen, Germany, 26–27 Sept.
2018,
https://drive.google.com/file/d/1gz3gagSG3TwwwmLZMdSl8APN3EIY0Get/view
Majama observes that (deontological) interests in human rights,
including a right to internet access articulated in the African
Declaration on Internet Rights and Freedoms, as well as matters of
justice and gender equality, are overshadowed in African contexts by
more (utilitarian) profit interests, patterns of despotic regimes using
internet shutdowns to quell dissent, and the central problem of
fulfilling basic needs via jobs and education vs. higher-level interests
in rights.
REFLECTION/DISCUSSION/WRITING QUESTIONS: ETHICS AND META-ETHICS
Now that you have reviewed a global range of ethical frameworks,
review one or two of the specific issues/cases of digital media ethics
that you have analyzed and perhaps resolved with some care in the
course of working through this volume.
1. Which of the ethical frameworks that we have now explored, i.e.,
utilitarianism
deontology
feminist ethics / ethics of care
virtue ethics
Confucian ethics
African ethics
seem(s) to have been most in play in your reflections and decision-
making? Explain your response here with some care, making clear for
yourself (and your reader, if applicable) how your analyses and
resolutions fit the patterns and approaches of a given ethical
framework.
2. Choose a framework that seems very far away from your own ethical
starting points (identified in [1]). Take up this same ethical issue and,
as best you can, provide an analysis and resolution of the issue using
this alternative framework. How far are the results similar to and/or
different from the results using your original ethical theory/theories?
3. How do you respond to these differences? That is, given what we’ve
now learned about
ethical relativism
ethical monism/absolutism
ethical pluralism
which of these three meta-ethical frameworks are you most likely/able
to apply to any differences that may emerge between the analyses and
resolutions you have developed in (1) and (2)?
SUGGESTED RESOURCES FOR FURTHER RESEARCH/REFLECTION/WRITING
INTERCULTURAL INFORMATION ETHICS (IIE)
Bielby, Jared (2015) Comparative Philosophies in Intercultural
Information Ethics, Confluence: Online Journal of World
Philosophies, 2: 233–53.
Bielby offers a comprehensive overview of the emergence and
development of IIE with application to key issues such as privacy and
pluralism. The article is a gateway into the many standard references
and figures in the field, most especially the extensive work of Rafael
Capurro.
Wong, Pak-hang (2012) Dao, Harmony and Personhood: Towards a
Confucian Ethics of Technology, Philosophy and Technology, 25(1):
67–86.
Wong provides a more general introduction and discussion of basic
Confucian concepts and how they may contribute to a specifically
Confucian ethics of technology.
ETHICAL RELATIVISM AND PLURALISM
Floridi, Luciano (2007) Global Information Ethics: The Importance of
Being Environmentally Earnest, International Journal of Technology
and Human Interaction, 3(3): 1–11.
Floridi – the author of a widely influential framework for information
and computing ethics – takes up here the specific challenges of
cultural diversity to a global ICE. He argues specifically for an ethical
pluralism in the form of what he calls a “lite” information ontology.
Mackenzie, Catriona (2008) Relational Autonomy, Normative
Authority and Perfectionism, Journal of Social Philosophy, 39(4):
512–33.
Mackenzie is a leading expositor of a feminist understanding of
relational autonomy. Here she refines her earlier accounts to offer a
“weak substantive, relational approach to autonomy that grounds an
agent’s normative authority over decisions of import to her life in her
practical identity and in relations of intersubjective recognition” (512)
– in part as such autonomy is central to a life of flourishing, that is, the
overriding focus of a virtue ethics (529). Mackenzie’s account of
relational autonomy further includes a “value pluralism.”
Notes
1 Specifically, in the subsequent decades of the Cold War, the world
has barely escaped massive nuclear annihilation – read: hundreds
of millions of human lives lost immediately, not to mention even
more extensive and long-term devastation of the larger
environment. And this happened more than once, and sometimes
only by dint of remarkable courage and the willingness to rely on
one’s human judgment rather than what early warning systems and
computer analyses claimed: as in the example of the Soviet Lt.
Colonel Stanislav Petrov in September 1983 (Lewis et al. 2014, 13).
How would these possibilities, coupled with some degree of
probability, figure into the hedonic calculus?
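To make the difficulty concrete – a minimal sketch on a standard
expected-value reading of the calculus, not Bentham's own
formalization – suppose each possible outcome i carries a pleasure
or pain u_i and a probability p_i, so that the calculus recommends
the act maximizing expected utility:

\[ E[U] = \sum_i p_i \, u_i \]

Even a vanishingly small p_i attached to an outcome as
catastrophically negative as nuclear annihilation can dominate the
entire sum – one reason such low-probability, extreme-harm
possibilities strain the hedonic calculus.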
2 Plato references are to the Stephanus volume and page number.
3 See note 4, below.
4 A further and very great complication in these debates results from
the complex ways in which feminism has unfolded over the past
four decades or so – i.e., from the “second wave” feminism of the
1960s and 1970s through third-wave and then post-feminism,
and/or a “‘fourth wave’ social media-based feminist activism” or
perhaps a “post-post-feminism” (Gill 2016, 613). Broadly, these
developments have involved dramatic shifts from strong opposition
to pornography* (as objectifying women and contributing to their
subjugation) to an embrace of both production and consumption of
pornography* as part of women’s choice, celebration of their
bodies, and taking control of their own sexualities – and then to
further critique of sexism and patriarchy. In particular, the
“#freethenipple” campaign appropriates the tropes of pornography
in order to protest against patriarchy (Rúdólfsdóttir and
Jóhannsdóttir 2018). At the same time, other feminists object that
doing so only reinforces patriarchal gender stereotypes and does
little for furthering women’s emancipation and equality (Matich,
Ashman, and Parsons 2018). For the sake of relative simplicity in
this introduction, I can only point to these developments and
complications as frameworks and issues for further research and
reflection.
(My very great thanks to Professor Amanda Karlsson, Aarhus
University, for her invaluable help here, including the reference to
Gill [2016].)
5 My very great thanks to Pak-Hang Wong for these and additional
suggestions.
References
Aarseth, Espen (2015) Meta-game Studies, Game Studies 15 (1: July),
http://gamestudies.org/1501/articles/editorial.
Abidin, Crystal (2018) Young People and Digital Grief Etiquette, pp.
160–74 in Zizi Papacharissi (ed.), A Networked Self and Birth, Life,
and Death. New York: Routledge.
Abramson, Jeffrey, Christopher Arterton, and Gary Orren (1988) The
Electronic Commonwealth: The Impact of New Media
Technologies on Democratic Politics. New York: Basic Books.
ACM (Association for Computing Machinery) (2018) Code of Ethics
and Professional Conduct, www.acm.org/code-of-ethics.
Adams, Carol J. (1996) “This Is Not Our Fathers’ Pornography”: Sex,
Lies, and Computers, pp. 147–70 in Charles Ess (ed.), Philosophical
Perspectives on Computer-Mediated Communication. Albany:
State University of New York Press.
Akemu, Ona, Gail Whiteman, and Steve Kennedy (2016) Social
Enterprise Emergence from Social Movement Activism: The
Fairphone Case, Journal of Management Studies 53: 5. DOI:
10.1111/joms.12208.
Albrechtslund, A. (2008) Online Social Networking as Participatory
Surveillance, First Monday, 13(3),
http://firstmonday.org/article/view/2142/1949.
Alexander, Leigh (2009) And You Thought Grand Theft Auto Was
Bad: Should the United States Ban a Japanese “Rape Simulator”
Game? Slate, March 9,
https://slate.com/technology/2009/03/should-the-united-states-
ban-rapelay-a-japanese-rape-simulator-game.html.
Ames, Roger, and Henry Rosemont, Jr. (1998) The Analects of
Confucius: A Philosophical Translation. New York: Ballantine
Books.
Attwood, Feona (2018) Sex Media. Cambridge: Polity.
Aufderheide, Pat, Aram Sinnreich, Maggie Clifford, and Saif Shahin
(under review) Access Shrugged: The Decline of the Copyleft and
the Rise of Pragmatic Openness. Submitted to Information,
Communication and Society.
Bäcke, Maria (2011) Make-Believe and Make-Belief in Second Life
Role-Playing Communities, Convergence: The International
Journal of Research into New Media Technologies 18(1): 85–92.
DOI: 10.1177/1354856511419917.
Baron, Naomi (2008) Always On: Language in an Online and Mobile
World. Oxford University Press.
Baumol, William J., and Alan S. Blinder (2011) Economics: Principles
and Policy (12th edn.). Mason, OH: South-Western Cengage
Learning.
Becker, Barbara (2001) The Disappearance of Materiality?, pp. 58–77
in V. Lemecha and R. Stone (eds.), The Multiple and the Mutable
Subject. Winnipeg: St. Norbert Arts Centre.
Benhabib, Seyla (1986) Critique, Norm, and Utopia: A Study of the
Foundations of Critical Theory. New York: Columbia University
Press.
Berbers, Yolande, Willem Debeuckelaere, Paul De Hert, et al. (2018)
Privacy in an Age of the Internet, Social Networks and Big Data.
Brussels: KVAB,
www.kvab.be/sites/default/rest/blobs/1501/tw_privacy_en .
Berry, David (2014) Post-Digital Humanities: Computation and
Cultural Critique in the Arts and Humanities, Educause Review,
May 19, http://er.educause.edu/articles/2014/5/postdigital-
humanities-computation-and-cultural-critique-in-the-arts-and-
humanities.
Bielby, Jared (2015) Comparative Philosophies in Intercultural
Information Ethics, Confluence: Online Journal of World
Philosophies 2: 233–53.
Birkner, Christine (2017) From Monopoly to Exploding Kittens, Board
Games Are Making a Comeback, Adweek, April 3,
www.adweek.com/brand-marketing/from-monopoly-to-exploding-
kittens-board-games-are-making-a-comeback.
Bleaney, Rob (2012) Amanda Todd: Suicide Girl’s Mum Reveals More
Harrowing Details of Cyber Bullying Campaign that Drove her
Daughter to her Death, Daily Mirror, October 15,
www.mirror.co.uk/news/world-news/amanda-todd-suicide-girls-
mum-1379909.
Boateng, Boatema (2011) Whose Democracy? Rights-based Discourse
and Global Intellectual Property Rights Activism, pp. 261–74 in
Robin Mansell and Marc Raboy (eds.), The Handbook of Global
Media and Communication Policy. Oxford: Wiley-Blackwell.
Boss, Judith (2013) Analyzing Moral Issues (6th edn.). Boston:
McGraw-Hill.
boyd, danah (2010) Social Steganography: Learning to Hide in Plain
Sight, Digital Media and Learning, Connected Learning Alliance,
August 23, https://clalliance.org/blog/social-steganography-
learning-to-hide-in-plain-sight.
boyd, danah, and Alice Marwick (2011) Social Privacy in Networked
Publics: Teens’ Attitudes, Practices, and Strategies, pp. 1–29 in
Proceedings of the “A Decade in Internet Time: OII Symposium on
the Dynamics of the Internet and Society,” September 21–24, 2011,
University of Oxford, http://papers.ssrn.com/sol3/papers.cfm?
abstract_id=1925128.
Braidotti, Rosi (2006) Transpositions: On Nomadic Ethics.
Cambridge: Polity.
Bromseth, Janne, and Jenny Sundén (2011) Queering Internet
Studies: Intersections of Gender and Sexuality, pp. 270–99 in Mia
Consalvo and Charles Ess (eds.), The Handbook of Internet Studies.
Oxford: Wiley-Blackwell.
Brownlee, Kimberley (2017) Civil Disobedience, in Edward N. Zalta
(ed.), The Stanford Encyclopedia of Philosophy (Fall 2017),
https://plato.stanford.edu/archives/fall2017/entries/civil-
disobedience.
Buchanan, Elizabeth, and Kathrine Andrews Henderson (2008) Case
Studies in Library and Information Science Ethics. Jefferson, NC:
McFarland.
Bunz, Mercedes, and Graham Meikle (2018) The Internet of Things.
Cambridge: Polity.
Burgess, Matt (2018) How to Secure Gmail and Stop Developers
Reading Your Messages, Wired, July 8,
www.wired.co.uk/article/gmail-security-checkup-apps-data.
Burgess, J. Peter, Luciano Floridi, Aurélie Pols, and Jeroen van den
Hoven (2018) EDPS Ethics Advisory Group. Report 2018,
https://edps.europa.eu/data-protection/our-work/our-work-by-
type/ethical-framework_fr.
Burk, Dan (2007) Privacy and Property in the Global Datasphere, pp.
94–107 in Soraj Hongladarom and Charles Ess (eds.), Information
Technology Ethics: Cultural Perspectives. Hershey, PA: Idea Group
Reference.
BusinessGhana (2018) Work on $30m E-waste Recycling Facility at
Agbogbloshie to Begin This Year (August 13),
www.businessghana.com/site/news/business/170330/Work-on-
$30m-e-waste-recycling-facility-at-Agbogbloshie-to-begin-this-
year.
Bynum, Terrell Ward (2000) A Very Short History of Computer
Ethics, Newsletter on Philosophy and Computers (American
Philosophical Association),
www.cs.utexas.edu/~ear/cs349/Bynum_Short_History.html
Campbell, Heidi (2017) Religious Communication and Technology,
ICA Annals of Communication 41(3–4): 228–34. DOI:
10.1080/23808985.2017.1374200.
Capurro, Rafael (2005) Privacy: An Intercultural Perspective, Ethics
and Information Technology 7(1): 37–47.
(2007) Information Ethics for and from Africa, IRIE International
Review of Information Ethics 7(09), www.i-r-i-
e.net/inhalt/007/01-capurro .
(2008) Intercultural Information Ethics, pp. 639–65 in Kenneth Einar
Himma and Herman T. Tavani (eds.), The Handbook of
Information and Computer Ethics. Hoboken, NJ: Wiley.
(2012) Intercultural Aspects of Digitally Mediated Whoness, Privacy
and Freedom, pp. 113–22 in Johannes Buchmann (ed.), Internet
Privacy: Eine multidisziplinäre Bestandsaufnahme / A
Multidisciplinary Analysis. Berlin: Deutsche Akademie der
Technikwissenschaften.
Carey, James (1989) Communication as Culture: Essays on Media
and Society. Boston: Unwin Hyman.
Cascone, Kim (2000) The Aesthetics of Failure: “Post-Digital”
Tendencies in Contemporary Computer Music, Computer Music
Journal 24(4: Winter): 12.
Chan, Joseph (2003) Confucian Attitudes towards Ethical Pluralism,
pp. 129–53 in Richard Madsen and Tracy B. Strong (eds.), The
Many and the One: Religious and Secular Perspectives on Ethical
Pluralism in the Modern World. Princeton University Press.
Chen, Zhen Troy, and Ming Cheung (2018) Privacy Perception and
Protection on Chinese Social Media: A Case Study of WeChat,
Ethics and Information Technology: 1–11. DOI: 10.1007/s10676-
018-9480-6.
Cheong, Pauline Hope, Judith N. Martin, and Leah P. Macfadyen
(2012) New Media and Intercultural Communication: Identity,
Community and Politics. Oxford: Peter Lang.
Christman, John (2004) Relational Autonomy, Liberal Individualism,
and the Social Constitution of Selves, Philosophical Studies: An
International Journal for Philosophy in the Analytic Tradition
117(1/2): 143–64.
CNN (2012) Massacre at Batman Premier, CNN transcripts, July 20,
http://transcripts.cnn.com/TRANSCRIPTS/1207/20/cnr.01.html.
CNN (2018) UK Phone Hacking Scandal Fast Facts, May 2,
https://edition.cnn.com/2013/10/24/world/europe/uk-phone-
hacking-scandal-fast-facts/index.html.
Coeckelbergh, Mark (2017) New Romantic Cyborgs: Romanticism,
Information Technology, and the End of the Machine. London:
MIT Press.
Cohen, Julie E. (2012) Configuring the Networked Self: Law, Code,
and the Play of Everyday Practice. New Haven, CT: Yale University
Press, www.juliecohen.com/page5.php.
Coleman, Kari Gwen (2001) Android Arête: Toward a Virtue Ethic for
Computational Agents, Ethics and Information Technology 3(4):
247–65.
Confessore, Nicolas (2018) Cambridge Analytica and Facebook: The
Scandal and the Fallout So Far, New York Times, April 4,
www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-
scandal-fallout.html.
Consalvo, Mia (2012) Confronting Toxic Gamer Culture: A Challenge
for Feminist Game Studies Scholars, Ada: Journal of Gender, New
Media and Technology 1, http://adanewmedia.org/2012/11/issue1-
consalvo.
Conway, Paul and Bertram Gawronski (2013) Deontological and
Utilitarian Inclinations in Moral Decision Making: A Process
Dissociation Approach, Journal of Personality and Social
Psychology 104: 216–35.
Couldry, Nick (2012) Media, Society, World: Social Theory and
Digital Media Practice. Cambridge: Polity.
Council of Europe (1950) The European Convention on Human
Rights, www.hri.org/docs/ECHR50.html.
Cox-George, Chantal, and Susan Bewley (2018) I, Sex Robot: The
Health Implications of the Sex Robot Industry, BMJ Sexual and
Reproductive Health 44: 161–4.
Cuthbertson, Anthony (2018) Hacked Sex Robots Could Murder
People, Security Expert Warns, Newsweek, January 1,
www.newsweek.com/hacked-sex-robots-could-murder-people-
767386.
Dahlberg, Lincoln (2017) Cyberlibertarianism, in Oxford Research
Encyclopedia of Communication. DOI:
10.1093/acrefore/9780190228613.013.70.
Daily Mail Reporter (2012) Are We Creating a Generation of
Murderers? Shoot ’Em Ups Such as Call of Duty “Train” Gamers to
Shoot Real Guns – and Hit Victims in the Head, May 1,
www.dailymail.co.uk/sciencetech/article-2137757/Are-creating-
generation-murderers-Shootem-ups-train-gamers-shoot-real-guns-
accurately-hit-victims-head.html#ixzz2DsrpAbjh.
Danaher, John, Sven Nyholm, and Brian D. Earp (2018) The
Quantified Relationship, The American Journal of Bioethics 18(2):
3–19, DOI: 10.1080/15265161.2017.1409823.
Davis, Nicola (2018) Claims about Social Benefits of Sex Robots
Greatly Overstated, Say Experts, The Guardian, June 4, 2018,
www.theguardian.com/science/2018/jun/04/claims-about-social-
benefits-of-sex-robots-greatly-overstated-say-experts?
Davisson, Amber, and Paul Booth (eds.) (2016) Controversies in
Digital Ethics. London: Bloomsbury Academic.
Dean, Jodi (2009) Democracy and Other Neoliberal Fantasies:
Communicative Capitalism and Left Politics. Durham, NC: Duke
University Press.
Debatin, Bernhard (ed.) (2007) The Cartoon Debate and the Freedom
of the Press: Conflicting Norms and Values in the Global Media
Culture / Der Karikaturenstreit und die Pressefreiheit: Wert und
Normenkonflikte in der globalen Medienkultur. Berlin: LIT.
(2011) Ethics, Privacy, and Self-Restraint in Social Networking, pp.
47–60 in S. Trepte and L. Reinecke (eds.), Privacy Online. Berlin:
Springer.
DeCew, Judith W. (1997) The Pursuit of Privacy: Law, Ethics, and the
Rise of Technology. Ithaca, NY: Cornell University Press.
Descartes, René ([1637] 1972) Discourse on Method, pp. 81–130 in
The Philosophical Works of Descartes, trans E. S. Haldane and G.
R. T. Ross, Vol. I. Cambridge University Press.
Dewey, Caitlin (2014) The Only Guide to Gamergate You Will Ever
Need to Read. The Washington Post, October 14,
www.washingtonpost.com/news/the-
intersect/wp/2014/10/14/the-only-guide-to-gamergate-you-will-
ever-need-to-read.
Dibbell, Julian ([1993] 2012) A Rape in Cyberspace,
DORS4 (Death Online Research Symposium 4) (2018) University of
Hull, UK, August 15–17, http://cc.au.dk/en/research/research-
programmes/cultural-transformations/cultures-and-practices-of-
death-and-dying/dorn/death-online-research-symposium-4.
Drotner, Kirsten (1999) Dangerous Media? Panic Discourses and
Dilemmas of Modernity, Paedagogica Historica: International
Journal of the History of Education 35(3): 593–619. DOI:
10.1080/0030923990350303.
Duhigg, Charles (2012) How Companies Learn Your Secrets, New
York Times, February 16,
www.nytimes.com/2012/02/19/magazine/shopping-habits.html.
Eickelman, Dale F. (2003) Islam and Ethical Pluralism, pp. 161–80 in
Richard Madsen and Tracy B. Strong (eds.), The Many and the
One: Religious and Secular Perspectives on Ethical Pluralism in
the Modern World. Princeton University Press.
Elshtain, Jean Beth (1982) Interactive TV: Democracy and the QUBE
Tube, The Nation, August 7–14: 108.
Ess, Charles (1995) Reading Adam and Eve: Re-Visions of the Myth of
Woman’s Subordination to Man, pp. 92–120 in Marie M. Fortune
and Carol J. Adams (eds.), Violence Against Women and Children:
A Christian Theological Sourcebook. New York: Continuum Press.
(1996) The Political Computer: Democracy, CMC, and Habermas, pp.
197–230 in C. Ess (ed.), Philosophical Perspectives on Computer-
Mediated Communication. Albany: State University of New York
Press.
(2005) “Lost in Translation?” Intercultural Dialogues on Privacy and
Information Ethics, Ethics and Information Technology 7(1): 1–6
[introduction to special issue on privacy and data privacy protection
in Asia].
(2006) Ethical Pluralism and Global Information Ethics, Ethics and
Information Technology 8(4): 215–26.
(2007) Cybernetic Pluralism in an Emerging Global Information and
Computing Ethics, International Review of Information Ethics
7(September), www.i-r-i-e.net/inhalt/007/11-ess .
(2010) The Embodied Self in a Digital Age: Possibilities, Risks, and
Prospects for a Pluralistic (Democratic/Liberal) Future? Nordicom
Information 32(2): 105–18, www.nordicom.gu.se/?
portal=publ&main=info_publ2.php&ex=320.
(2011) Self, Community, and Ethics in Digital Mediatized Worlds, pp.
3–30 in C. Ess and M. Thorseth (eds.), Trust and Virtual Worlds:
Contemporary Perspectives. New York: Peter Lang.
(2012) At the Intersections between Internet Studies and Philosophy:
“Who Am I Online?” Philosophy and Technology. DOI:
10.1007/s13347-012-0085-4.
(2016) Ethical Approaches for Copying Digital Artifacts: What Would
the Exemplary Person [Junzi] / a Good Person [Phronemos] Say?,
pp. 295–313 in Darren Hick and Reinold Schmücker (eds.), The
Aesthetics and Ethics of Copying. London: Bloomsbury.
(2017a) God Out of the Machine? The Politics and Economics of
Technological Development, pp. 83–111 in A. Beavers (ed.),
Philosophy, Macmillan Interdisciplinary Handbooks. Farmington
Hills, MI: Macmillan Reference.
(2017b) Grounding Internet Research Ethics 3.0: A View from (the)
AoIR (Foreword), pp. ix–xv in Michael Zimmer and Katharina
Kinder-Kurlanda (eds.), Internet Research Ethics for the Social
Age: New Challenges, Cases, and Contexts. Berlin: Peter Lang.
(2018a) Afterword, pp. 264–277 in Amanda Lagerkvist (ed.), Digital
Existence: Ontology, Ethics and Transcendence in Digital Culture.
London: Routledge.
(2018b) Ethics in HMC: Recent Developments and Case Studies, pp.
237–57 in Andrea Guzman (ed.), Human–Machine
Communication: Rethinking Communication, Technology, and
Ourselves. Berlin: Peter Lang.
(2019) From the Digital to a Post-digital Era?, pp. 105–18 in Mireille
Hildebrandt and Kieron O’Hara (eds.), Life and the Law in the Era
of Data-Driven Agency. Northampton, MA: Edward Elgar.
(in press) The Ethics of Mobile Communication: A Rough Guide. In
Rich Ling, Gerard Goggin, Leopoldina Fortunati, and Sun Sun Lim
(eds.), Oxford Handbook of Mobile Communication, Culture, and
Information.
Ess, Charles, and Ylva Hård af Segerstad (2019) Everything Old is New
Again: The Ethics of Digital Inquiry and its Design, pp. 179–96 in
Åsa Mäkitalo, Todd E. Nicewonger, and Mark Elam (eds.), Designs
for Experimentation and Inquiry: Approaching Learning and
Knowing in Digital Transformation. London: Routledge.
EDPS (European Data Protection Supervisor) (2015) Opinion 4/15:
Towards a New Digital Ethics.
https://edps.europa.eu/sites/edp/files/publication/15-09-
11_data_ethics_en #
European Union (1995) Directive 95/46/EC of the European
Parliament and of the Council of 24 October 1995, http://eur-
lex.europa.eu/LexUriServ/LexUriServ.do?
uri=CELEX:31995L0046:EN:HTML.
Facebook (2019) Facebook Reports Fourth Quarter and Full Year
2018 Results,
https://s21.q4cdn.com/399680738/files/doc_financials/2018/Q4/Q4-
2018-Earnings-Release .
Finnemann, Niels Ole (2005) Internettet i mediehistorisk perspektiv
[The Internet in Media-Historical Perspective]. Frederiksberg,
Denmark: Samfundslitteratur.
Floridi, Luciano (2005) The Ontological Interpretation of
Informational Privacy, Ethics and Information Technology 7(4):
185–200.
(2006) Four Challenges for a Theory of Informational Privacy, Ethics
and Information Technology 8(3): 109–19.
(2013) Distributed Morality in an Information Society, Science and
Engineering Ethics 19: 727. DOI: 10.1007/s11948-012-9413-4.
(ed.) (2015) The Onlife Manifesto: Being Human in a
Hyperconnected Era, London: Springer Open. (Freely
downloadable: www.springer.com/la/book/9783319040929.)
Floridi, Luciano, Josh Cowls, Monica Beltrametti et al. (2018) An
Ethical Framework for a Good AI Society: Opportunities, Risks,
Principles, and Recommendations, Minds and Machines 28(4):
689–707.
Foucault, M. (1987) The Ethic of Care for the Self as a Practice of
Freedom, pp. 1–20 in J. Bernhauer and D. Rasmussen (eds.), The
Final Foucault. Cambridge, MA: MIT Press.
(1988) Technologies of the Self, pp. 16–49 in L. H. Martin, H. Gutman,
and P. Hutton (eds.), Technologies of the Self: A Seminar with
Michel Foucault. Amherst: University of Massachusetts Press.
Gabriels, Katleen (2016) “I Keep a Close Watch on this Child of Mine”:
A Moral Critique of Other-Tracking Apps, Ethics of Information
Technology 18: 175–84. DOI 10.1007/s10676-016-9405-1.
Gentile, Douglas A., Dongdong Li, Angeline Khoo, Sara Prot, and Craig
A. Anderson (2014) Mediators and Moderators of Long-term
Effects of Violent Video Games on Aggressive Behavior, JAMA
(Journal of the American Medical Association) Pediatrics 168(5):
450–7. DOI: 10.1001/jamapediatrics.2014.63.
GDPR (2016) General Data Protection Regulation (GDPR), Regulation
EU 2016/679. Approved April 27, 2016, implemented May 25,
2018, http://eur-lex.europa.eu/legal-content/EN/TXT/?
uri=CELEX:32016R0679.
Gehl, Robert W. (2016) Power/Freedom on the Dark Web: A Digital
Ethnography of the Dark Web Social Network, new media & society
18(7): 1219–35.
Gerber, Nina, Paul Gerber, and Melanie Volkamer (2018) Explaining
the Privacy Paradox: A Systematic Review of Literature
Investigating Privacy Attitude and Behaviour, Computers &
Security 77: 226–61.
Ghosh, S. (2006) The Troubled Existence of Sex and Sexuality:
Feminists Engage with Censorship, pp. 255–85 in B. Bose (ed.),
Gender and Censorship. New Delhi: Women Unlimited.
Gibson, William (1984) Neuromancer. New York: Ace Books.
Gill, Rosalind (2016) Post-postfeminism? New Feminist Visibilities in
Postfeminist Times, Feminist Media Studies 16(4): 610–30. DOI:
10.1080/14680777.2016.1193293.
Gilligan, Carol (1982) In a Different Voice: Psychological Theory and
Women’s Development. Cambridge, MA: Harvard University Press.
Glancy, Dorothy J. (1979) The Invention of the Right to Privacy,
Arizona Law Review 21(1): 1–39.
Greene, Joshua D. (2014) Beyond Point-and-Shoot Morality: Why
Cognitive (Neuro)Science Matters for Ethics, Ethics 124(July 2014):
695–726.
Greenleaf, Graham (2011) Asia-Pacific Data Privacy: 2011, Year of
Revolution? UNSW Law Research Paper No. 2011-29,
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1914212.
Grodzinsky, Frances S., Keith Miller, and Marty J. Wolf (2008) The
Ethics of Designing Artificial Agents, Ethics and Information
Technology 10(2–3): 115–21. DOI: 10.1007/s10676-008-9163-9.
Guess, Andrew, Brendan Nyhan, and Jason Reifler (2018) Selective
Exposure to Misinformation: Evidence from the Consumption of
Fake News during the 2016 US Presidential Campaign, European
Research Council, www-personal.umich.edu/~bnyhan/fake-news-
2016 .
Haddon, Leslie, and Gitte Stald (2009) Cultures of Research and
Policy in Europe, pp. 55–70 in S. Livingstone and L. Haddon (eds.),
Kids Online: Opportunities and Risks for Children. Bristol: Policy
Press.
Hallnäs, Lars, and Johan Redström (2001) Slow Technology –
Designing for Reflection, Personal and Ubiquitous Computing 5
(2001): 201–12.
Hansen, Mette Halskov, and Rune Svarverud (eds.) (2010) The Rise of
the Individual in Modern Chinese Society. Copenhagen: Nordic
Institute of Asian Studies.
Hård af Segerstad, Ylva, and Dick Kasperowski (2015) Opportunities
and Challenges of Studying Closed Communities Online: Digital
Methods and Ethical Considerations. University of Göteborg,
Sweden.
Hargittai, Eszter, and Alice Marwick (2016) “What Can I Really Do?”
Explaining the Privacy Paradox with Online Apathy, International
Journal of Communication 10(2016): 3737–57.
Henaff, Marcel, and Tracy Strong (2001) Public Space and
Democracy. Minneapolis: University of Minnesota Press.
Heyd, David (2016) Supererogation, in Edward N. Zalta (ed.), The
Stanford Encyclopedia of Philosophy (Spring 2016 Edition),
https://plato.stanford.edu/archives/spr2016/entries/supererogation
Hick, Darren, and Reinold Schmücker (eds.) (2016) The Aesthetics
and Ethics of Copying. London: Bloomsbury.
Hildebrandt, Mireille (2015) Smart Technologies and the End(s) of
Law: Novel Entanglements of Law and Technology. Northampton,
MA: Edgar Elgar.
Hintz, Arne, Lina Dencik, and Karin Wahl-Jorgensen (2018) Digital
Citizenship in a Datafied Society. Oxford: Polity.
Hoffman, Samantha (2017) Programming China: The Communist
Party’s Autonomic Approach to Managing State Security, Merics
China Monitor 44, www.merics.org/en/microsite/china-
monitor/programming-china.
Hoffmann, E. T. A. ([1816] 2003) Der Sandmann [The Sand Man], ed.
Rudolf Drux. Ditzingen, Germany: Reclam.
Holpuch, Amanda (2016) Tim Cook Says Apple’s Refusal to Unlock
iPhone for FBI is a “Civil Liberties” Issue, The Guardian, February
22, www.theguardian.com/technology/2016/feb/22/tim-cook-
apple-refusal-unlock-iphone-fbi-civil-liberties.
Holst, Cathrine (2017) Hva er feminisme? [What is feminism?] (2nd
edn.). Oslo: Universitetsforlaget.
Hongladarom, Soraj (2007) Analysis and Justification of Privacy from
a Buddhist Perspective, pp. 108–22 in Soraj Hongladarom and
Charles Ess (eds.), Information Technology Ethics: Cultural
Perspectives. Hershey, PA: Idea Group Reference.
(2017) Internet Research Ethics in a Non-Western Context, pp. 151–63
in Michael Zimmer and Katharina Kinder-Kurlanda (eds.), Internet
Research Ethics for the Social Age: New Challenges, Cases, and
Contexts. Berlin: Peter Lang.
Hornung, Peter Michael (2010) ARoS tager Eros på tapetet [ARoS –
the Aarhus Art Museum – Puts Eros on the Agenda], Politiken,
March 28. [The online version of this story, including the
uncensored image of the Jeff Koons painting, is no longer available.
The curious can easily find the image online, though you will have
to turn off any filtering software, including Google’s, for images. US
readers should be advised that the image would be X-rated, though
of a more softcore sort.]
Hovde, Astrid Linnea Løland (2016) Grief 2.0: Grieving in an Online
World. MA thesis, Department of Media and Communication,
University of Oslo,
www.duo.uio.no/bitstream/handle/10852/52544/Hovde-Master-
2016 ?sequence=5.
Howard, Philip N., Aiden Duffy, Deen Freelon, Muzammil Hussain,
Will Mari, and Marwa Mazaid (2011) Opening Closed Regimes:
What Was the Role of Social Media During the Arab Spring?
Project on Information Technology and Political Islam, Research
Memo 2011.1. Seattle: University of Washington,
https://deepblue.lib.umich.edu/bitstream/handle/2027.42/117568/2011_Howard-
Duffy-Freelon-Hussain-Mari-Mazaid_PITPI ?
sequence=1&isAllowed=y.
Hsu, Shang Hwa, Ming-Hui Wen, and Muh-Cherng Wu (2009)
Exploring User Experiences as Predictors of MMORPG Addiction,
Computers & Education 53: 990–9.
Hughes, James (2012) Compassionate AI and Selfless Robots: A
Buddhist Approach, pp. 69–83 in Patrick Lin, Keith Abney, and
George A. Bekey (eds.), Robot Ethics: The Ethical and Social
Implications of Robotics. Cambridge, MA: MIT Press.
Huizinga, Johan ([1938] 1955) Homo Ludens: A Study of the Play-
Element in Culture. Boston: Beacon Press.
Hursthouse, Rosalind (1999) On Virtue Ethics. Oxford University
Press.
IEEE (2019) IEEE Global Initiative on Ethics of Autonomous and
Intelligent Systems. Ethically Aligned Design: A Vision for
Prioritizing Human Well-being with Autonomous and Intelligent
Systems, First Edition, https://standards.ieee.org/content/ieee-
standards/en/industry-connections/ec/autonomous-systems.html.
Internet World Stats (2018) www.internetworldstats.com/stats.htm.
Jefferson, Thomas ([1776] 1984) A Declaration by the Representatives
of the United States of America, in General Congress Assembled,
pp. 19–24 in Merrill D. Peterson (ed.), Thomas Jefferson: Writings.
New York: Library of America.
Jenkins, Henry (2006) Convergence Culture: Where Old and New
Media Collide. New York University Press.
Jensen, Jakob Linaa (2007) The Internet Omnopticon, pp. 351–80 in
H. Bang and A. Esmark (eds.), New Publics with/out Democracy.
Frederiksberg, Denmark: Samfundslitteratur/NORDICOM.
Jin, Dal Yong (2015) Digital Platforms, Imperialism and Political
Culture. New York: Routledge.
Johnson, Deborah (2001) Computer Ethics (3rd edn.). Upper Saddle
River, NJ: Prentice-Hall.
Kang, Cecilia (2019) F. T. C. [Federal Trade Commission] Approves
Facebook Fine of About $5 Billion. New York Times, July 12,
www.nytimes.com/2019/07/12/technology/facebook-ftc-fine.html?
searchResultPosition=.
Kant, Immanuel ([1785] 1959) Foundations of the Metaphysics of
Morals, trans. Lewis White Beck. Indianapolis: Bobbs-Merrill.
([1788] 1956) Critique of Practical Reason, trans. Lewis White Beck.
Indianapolis: Bobbs-Merrill.
Khandelwal, Swati (2018) FBI Seizes Control of a Massive Botnet that
Infected over 500,000 Routers, The Hacker News (May 23),
http://bit.ly/31qZXBs.
King, Martin Luther, Jr. ([1963] 1964) Letter from the Birmingham
Jail, pp. 77–100 in Martin Luther King, Jr. (ed.), Why We Can’t
Wait. New York: Mentor.
Kitiyadisai, Krisana (2005) Privacy Rights and Protection: Foreign
Values in Modern Thai Context, Ethics and Information
Technology 7(1): 17–26.
Kobie, Nicole (2019) The Complicated Truth about China’s Social
Credit System, Wired, January 21, www.wired.co.uk/article/china-
social-credit-system-explained.
Kondor, Zsuzsanna (2009) Communication and the Metaphysics of
Practice: Sellarsian Ethics Revisited, pp. 179–87 in Kristóf Nyíri
(ed.), Engagement and Exposure: Mobile Communication and the
Ethics of Social Networking. Vienna: Passagen.
Kostka, Genia (2018) China’s Social Credit Systems and Public
Opinion: Explaining High Levels of Approval, July 23, SSRN:
https://ssrn.com/abstract=3215138 or DOI: 10.2139/ssrn.3215138.
Kramer, Adam D. I., Jamie E. Guillory, and Jeffrey T. Hancock
(2014) Experimental Evidence of Massive-Scale Emotional
Contagion through Social Networks, Proceedings of the National
Academy of Sciences (June 17, 2014) 111(24): 8788–90; published
ahead of print June 2, 2014, DOI: 10.1073/pnas.1320040111.
Lagerkvist, Amanda (2016) Existential Media: Toward a Theorization
of Digital Thrownness, new media & society 19(1: 2017): 96–110;
online first, June 13, 2016.
(ed.) (2018) Digital Existence: Ontology, Ethics and Transcendence in
Digital Culture. London: Routledge.
Lagerkvist, Amanda, and Yvonne Andersson (2017) The Grand
Interruption: Death Online and Mediated Lifelines of Shared
Vulnerability, Feminist Media Studies 17(4): 550–64. DOI:
10.1080/14680777.2017.1326554.
Lange, Patricia G. (2007) Publicly Private and Privately Public: Social
Networking on YouTube, Journal of Computer-Mediated
Communication 13(1): 361–80. DOI: 10.1111/j.1083-
6101.2007.00400.x.
Latonero, Mark, and Aram Sinnreich (2014) The Hidden Demography
of New Media Ethics, Information, Communication & Society
17(5): 572–93. DOI: 10.1080/1369118X.2013.808364.
Levin, Sam, Julia Carrie Wong, and Luke Harding (2016) Facebook
Backs Down from “Napalm Girl” Censorship and Reinstates Photo,
The Guardian, September 9,
www.theguardian.com/technology/2016/sep/09/facebook-
reinstates-napalmgirl-photo.
Levinas, Emmanuel (1987) Time and the Other and Additional
Essays, trans. Richard A. Cohen. Pittsburgh: Duquesne University
Press.
Levy, David (2007) Love and Sex with Robots: The Evolution of
Human–Robot Relationships. New York: Harper Collins.
Lewis, Paul (2017) “Our Minds Can Be Hijacked”: The Tech Insiders
Who Fear a Smartphone Dystopia, The Guardian, October 6,
www.theguardian.com/technology/2017/oct/05/smartphone-
addiction-silicon-valley-dystopia.
Lewis, Patricia, Heather Williams, Benoit Pelopidas, and Sasan
Aghlani (2014) Too Close for Comfort: Cases of Near Nuclear Use
and Options for Policy, Chatham House Report,
www.chathamhouse.org/sites/default/files/field/field_document/20140428TooCloseforComfortNuclearUseLewisWilliamsPelopidasAghlani
Lim, Merlyna (2006) Democracy, Conspiracy, Pornography: The
Internet and Political Activism in Indonesia. Lecture at “IR 7.0:
Internet Convergences Conference,” Brisbane, September 28.
Lim, Merlyna (2018) Roots, Routes, and Routers: Communications
and Media of Contemporary Social Movements, Journalism &
Communication Monographs 20(2): 92–136. DOI:
10.1177/1522637918770419.
Lindgren, Simon (2017) Digital Media and Society: Theories, Topics
and Tools. London: Sage.
Ling, Rich (2017) The Social Dynamics of Mobile Group Messaging,
Annals of the International Communication Association 41(3–4):
242–9. DOI: 10.1080/23808985.2017.1374199.
Liptak, Adam (2011) Justices Reject Ban on Violent Video Games for
Children, New York Times, June 27,
www.nytimes.com/2011/06/28/us/28scotus.html?pagewanted=all.
Livingstone, Sonia (2011a) Internet, Children, and Youth, pp. 348–68
in Mia Consalvo and Charles Ess (eds.), The Handbook of Internet
Studies. Oxford: Wiley-Blackwell.
(2011b) Regulating the Internet in the Interests of Children: Emerging
European and International Approaches, pp. 505–24 in Robin
Mansell and Marc Raboy (eds.), The Handbook of Global Media
and Communication Policy. Chichester, West Sussex: Wiley-
Blackwell.
Livingstone, Sonia, Leslie Haddon, Anke Görzig, and Kjartan Ólafsson
(2011) EU Kids Online: Final Report 2011,
http://eprints.lse.ac.uk/45490.
Lomborg, Stine (2012) Negotiating Privacy through Phatic
Communication: A Case Study of the Blogging Self, Philosophy and
Technology 25: 415–34. DOI: 10.1007/s13347-011-0018-7.
Lomborg, Stine, and Charles Ess (2012) “Keeping the Line Open and
Warm”: An Activist Danish Church and its Presence on Facebook,
pp. 169–90 in Pauline Cheong, Judith N. Martin, and Leah P.
Macfadyen (eds.), New Media and Intercultural Communication:
Identity, Community and Politics. Oxford: Peter Lang.
Lü, Yao-Hui (2005) Privacy and Data Privacy Issues in Contemporary
China, Ethics and Information Technology 7(1): 7–15.
Lüders, Marika (2011) Why and How Online Sociability Became Part
and Parcel of Teenage Life, pp. 456–73 in Mia Consalvo and
Charles Ess (eds.), The Handbook of Internet Studies. Oxford:
Wiley-Blackwell.
Lukesch, Helmut (2012) Computerspiele und “Spielsucht” [Computer
Games and Pathological Gambling], pp. 147–68 in Martin K. W.
Schweer (ed.), Medien in unserer Gesellschaft: Chancen und
Risiken [Media in Our Society: Opportunities and Risks]. Frankfurt
am Main: Lang.
Martens, H., and T. Brown (2018) Relational Autonomy and the
Quantified Relationship, American Journal of Bioethics 18(2): 39–40.
Martin, Daniel (2012) School Groping Surge Blamed on Internet Porn:
Third of Sixth-Form Girls Have Been Abused by Classmates, Daily
Mail, November 13, www.dailymail.co.uk/news/article-
2232582/School-groping-surge-blamed-net-porn-Third-sixth-
form-girls-abusedclassmates.html.
Marwick, Alice, and danah boyd (2014) Networked Privacy: How
Teenagers Negotiate Context in Social Media, new media & society,
16(7): 1051–67.
Marx, Karl, and Friedrich Engels ([1846] 1976) The German Ideology.
In Karl Marx and Friedrich Engels: Collected Works, vol. V. New
York: International Publishers and Progress Publishers.
Massumi, Brian (2002) Parables for the Virtual: Movement, Affect,
Sensation. Durham, NC, and London: Duke University Press.
Matich, Margaret, Rachel Ashman, and Elizabeth Parsons (2018)
#freethenipple – Digital Activism and Embodiment in the
Contemporary Feminist Movement, Consumption Markets &
Culture. DOI: 10.1080/10253866.2018.1512240.
McKenna, Michael, and D. Justin Coates (2018) Compatibilism, in
Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy
(Winter 2018 Edition),
https://plato.stanford.edu/archives/win2018/entries/compatibilism
Mendez, Mario F., Eric Anderson, and Jill S. Shapira (2005) An
Investigation of Moral Judgment in Frontotemporal Dementia,
Cognitive and Behavioral Neurology 18: 193–7.
Menkiti, Ifeanyi A. (1979) Person and Community in African
Traditional Thought, pp. 157–68 in Richard A. Wright (ed.),
African Philosophy. New York University Press.
Meyrowitz, Joshua (1985) No Sense of Place: The Impact of Electronic
Media on Social Behavior. Oxford University Press.
Midgley, Mary ([1981] 1996) Trying Out One’s New Sword, pp. 116–19
in John Arthur (ed.), Morality and Moral Controversies (4th edn.).
Upper Saddle River, NJ: Simon & Schuster.
Moor, James H. (1997) Towards a Theory of Privacy in the
Information Age, ACM SIGCAS Computers and Society 27: 27–32.
(2000) Toward a Theory of Privacy in the Information Age, pp. 200–
12 in R. M. Baird, R. Ramsower, and S. E. Rosenbaum (eds.),
Cyberethics: Social & Moral Issues in the Computer Age. Amherst,
NY: Prometheus Books.
Moseley, Raam (2019) The Pirate Bay is Down – Top 7 Pirate Bay
(TPB) Alternatives, Cyberguards, May 26,
https://cyberguards.com/new-pirate-bay-is-down-2019s-top-7-
best-alternatives.
Moyer, Melinda Wenner (2018) Yes, Violent Video Games Trigger
Aggression, but Debate Lingers, Scientific American, October 2,
www.scientificamerican.com/article/yes-violent-video-games-
trigger-aggression-but-debate-lingers.
Mullins, Phil (1996) Sacred Text in the Sea of Texts: The Bible in
North American Electronic Culture, pp. 271–302 in Charles Ess
(ed.), Philosophical Perspectives on Computer-Mediated
Communication. Albany: State University of New York Press.
Musgrave, Frank W. (2006) The Economics of U.S. Health Care
Policy: The Role of Market Forces. London: Routledge.
Myskja, Bjørn (2008) The Categorical Imperative and the Ethics of
Trust, Ethics and Information Technology 10(4): 213–20.
Nakada, Makoto, and Takanori Tamura (2005) Japanese Conceptions
of Privacy: An Intercultural Perspective, Ethics and Information
Technology 7(1): 27–36.
Nash, Victoria, Joanna R. Adler, Miranda A. H. Horvath et al. (2015)
Identifying the Routes by which Children View Pornography
Online: Implications for Future Policy-Makers Seeking to Limit
Viewing, http://eprints.lse.ac.uk/65450.
NESH ([Norwegian] National Committee for Research Ethics in the
Social Sciences and the Humanities) (2006) Forskningsetiske
retningslinjer for samfunnsvitenskap, humaniora, juss og teologi
[Research Ethics Guidelines for Social Sciences, the Humanities,
Law and Theology],
www.etikkom.no/globalassets/documents/publikasjoner-som-
pdf/forskningsetiske-retningslinjer-for-samfunnsvitenskap-
humaniora-juss-og-teologi-2006 .
Nissenbaum, Helen (2010) Privacy in Context: Technology, Policy,
and the Integrity of Social Life. Palo Alto, CA: Stanford University
Press.
Noddings, Nel (1984) Caring: A Feminine Approach to Ethics and
Moral Education. Berkeley: University of California Press.
Nørskov, Marco (ed.) (2016) Social Robots: Boundaries, Potential,
Challenges. Farnham, Surrey: Ashgate.
[Norwegian] Outdoor Recreation Act [Lov om friluftslivet] (1957),
https://lovdata.no/dokument/NL/lov/1957-06-28-16.
Nussbaum, Martha C. ([1986] 2001) The Fragility of Goodness: Luck
and Ethics in Greek Tragedy and Philosophy. New York:
Cambridge University Press.
Øian, Hogne, Peter Fredman, Klas Sandell, Anna Dóra Sæþórsdóttir,
Liisa Tyrväinen, and Frank Søndergaard Jensen (2018) Tourism,
Nature and Sustainability: A Review of Policy Instruments in the
Nordic Countries. Nordic Council of Ministers / Denmark,
https://norden.diva-
portal.org/smash/get/diva2:1209894/FULLTEXT01 .
Ong, Walter (1988) Orality and Literacy: The Technologizing of the
Word. London: Routledge.
Ortega y Gasset, José (2002) Toward a Philosophy of History, trans.
Helene Weyl. Urbana and Chicago: University of Illinois Press.
Paasonen, Susanna (2010) Labors of Love: Netporn, Web 2.0 and the
Meanings of Amateurism, new media & society 12(8): 1297–312.
(2011) Online Pornography: Ubiquitous and Effaced, pp. 424–39 in
Mia Consalvo and Charles Ess (eds.), The Handbook of Internet
Studies. Oxford: Wiley-Blackwell.
Paasonen, Susanna, Kaarina Nikunen, and Laura Saarenmaa (eds.)
(2007) Pornification: Sex and Sexuality in Media
Culture. Oxford and New York: Berg.
Pane, Lisa Marie (2018) After Mass Shootings, NRA Pins Blame on
Familiar List, AP News, May 24,
https://apnews.com/d9e2f6f20c6c48869109c5f4a5d6d348.
Papacharissi, Zizi (2010) A Private Sphere: Democracy in a Digital
Age. Cambridge, and Malden, MA: Polity.
(ed.) (2018) A Networked Self and Birth, Life, Death. London:
Routledge.
Pariser, Eli (2011) The Filter Bubble. London: Viking.
Paterson, Barbara (2007) We Cannot Eat Data: The Need for
Computer Ethics to Address the Cultural and Ecological Impacts of
Computing, pp. 153–68 in Soraj Hongladarom and Charles Ess
(eds.), Information Technology Ethics: Cultural Perspectives.
Hershey, PA: Idea Group Reference.
Patrignani, Norberto, and Diane Whitehouse (2018) Slow Tech and
ICT: A Responsible, Sustainable and Ethical Approach. Cham,
Switzerland: Palgrave Macmillan.
Perlroth, Nicole, Amie Tsang, and Adam Satariano (2018) Marriott
Hacking Exposes Data of Up to 500 Million Guests, New York
Times, November 30,
www.nytimes.com/2018/11/30/business/marriott-data-
breach.html.
Pew Research Center (2015) America’s Changing Religious Landscape,
www.pewforum.org/2015/05/12/americas-changing-religious-
landscape.
Plato (1892) The Apology, pp. 109–35 in The Dialogues of Plato (3rd
edn.), trans. B. Jowett, Vol. II. Oxford University Press.
(1991) The Republic, trans. Allan Bloom, with notes, an interpretive
essay, and a new introduction. New York: Basic Books.
Postman, Neil (1985) Amusing Ourselves to Death: Public Discourse
in the Age of Show Business. New York: Penguin.
Rachels, James (1975) Why Privacy is Important, Philosophy and
Public Affairs 4(4): 323–33.
Ramose, Mogobe B. (2002) Globalization and Ubuntu, pp. 626–50 in
Pieter Coetzee and Abraham Roux (eds.), Philosophy from Africa:
A Text with Readings (2nd edn.). Oxford University Press.
Reading, Anna (2009) The Playful Panopticon? Ethics and the Coded
Self in Social Networking Sites, pp. 93–101 in Kristóf Nyíri (ed.),
Engagement and Exposure: Mobile Communication and the Ethics
of Social Networking. Vienna: Passagen.
Rheingold, Howard (1993) The Virtual Community: Homesteading on
the Electronic Frontier. New York: HarperCollins.
Richardson, Kathleen (2015) The Asymmetrical “Relationship”:
Parallels Between Prostitution and the Development of Sex Robots.
SIGCAS Computers & Society 45(3): 290–3.
Robinson, Jessica (2016) “Statistical Appendix” to Trine Syvertsen,
Gunn Enli, Ole J. Mjøs, and Hallvard Moe (eds.), The Media
Welfare State: Nordic Media in the Digital Era. Ann Arbor:
University of Michigan Press, 2014. Updated edn. available: doi:
10.3998/nmw.12367206.0001.001.
Rohner, Ronald P. (1984) Toward a Conception of Culture for Cross-
Cultural Psychology, Journal of Cross-Cultural Psychology 15(2):
111–38. DOI: 10.1177/0022002184015002002.
Romm, Tony (2019) France Fines Google Nearly $57 Million for First
Major Violation of New European Privacy Regime, Washington
Post, January 21, https://wapo.st/2KpLuzX.
Roose, Kevin (2019) Do Not Disturb: How I Ditched My Phone and
Unbroke My Brain, New York Times, February 23,
www.nytimes.com/2019/02/23/business/cell-phone-
addiction.html.
Rosemont, Henry, Jr. (2006) Individual Rights vs. Social Justice: A
Confucian Meditation. Lecture given at Drury University,
Springfield, MO, April 6.
Rouvroy, Antoinette (2008) Privacy, Data Protection, and the
Unprecedented Challenges of Ambient Intelligence, Studies in
Ethics, Law, and Technology, Berkeley Electronic Press, SSRN:
https://ssrn.com/abstract=1013984.
Ruddick, Sara (1975) Better Sex, pp. 83–104 in R. Baker and F.
Elliston (eds.), Philosophy and Sex. Amherst, NY: Prometheus
Books.
(1989) Maternal Thinking: Towards a Politics of Peace. Boston:
Beacon Press.
Rúdólfsdóttir, Annadís G., and Ásta Jóhannsdóttir (2018) Fuck
Patriarchy! An Analysis of Digital Mainstream Media Discussion of
the #Freethenipple activities in Iceland in March 2015, Feminism &
Psychology 28(1): 133–51.
Rusbridger, Alan, and Ewen MacAskill (2014) Edward Snowden
Interview – the Edited Transcript, The Guardian, July 18,
www.theguardian.com/world/2014/jul/18/-sp-edward-snowden-
nsa-whistleblower-interview-transcript.
Sabra, Jakob Borrits (2017) “I Hate When They Do That!” Netiquette
in Mourning and Memorialization among Danish Facebook Users,
Journal of Broadcasting & Electronic Media 61(1): 24–40. DOI:
10.1080/08838151.2016.1273931.
Schwartz, Shalom H. (2015) Basic Individual Values: Sources and
Consequences, pp. 63–84 in D. Sander and T. Brosch (eds.),
Handbook of Value. Oxford University Press.
Schwartz, Margaret (2019) Thrownness, Vulnerability, Care: A
Feminist Ontology for the Digital Age, pp. 81–99 in Amanda
Lagerkvist (ed.), Digital Existence: Ontology, Ethics and
Transcendence in Digital Culture. London: Routledge.
Shahbaz, Adrian (2018) Freedom on the Net 2018: The Rise of Digital
Authoritarianism: Fake News, Data Collection, and the Challenge to
Democracy, https://freedomhouse.org/report/freedom-
net/freedom-net-2018/rise-digital-authoritarianism.
Shelley, Mary Wollstonecraft ([1818] 1933) Frankenstein: or, a
Modern Prometheus. New York: Dutton.
Shutte, Augustine (1993) Philosophy for Africa. University of Cape
Town Press.
Sicart, Miguel (2009) The Ethics of Computer Games. Cambridge,
MA: MIT Press.
Sigot, Nathalie (2002) Jevons’s Debt to Bentham: Mathematical
Economy, Morals and Psychology, The Manchester School 70(2):
262–78.
Simon, Judith (2015) Distributed Epistemic Responsibility in a
Hyperconnected Era, pp. 145–59 in L. Floridi (ed.), The Onlife
Manifesto: Being Human in a Hyperconnected Era. London:
Springer Open.
Sinnott-Armstrong, Walter (2015) Consequentialism, in Edward N.
Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2015
Edition),
https://plato.stanford.edu/archives/win2015/entries/consequentialism
Smith, Aaron (2017) Americans and Cybersecurity. Pew Research
Center (January 26),
www.pewinternet.org/2017/01/26/americans-and-cybersecurity.
Solon, Olivia (2017) Tech’s Terrible Year: How the World Turned on
Silicon Valley in 2017, The Guardian, December 23,
www.theguardian.com/technology/2017/dec/22/tech-year-in-
review-2017.
Spencer, Michael K. (2019) Germany Anti-Trust Slaps Facebook with
Data Gathering Limit, Medium, February 7,
https://medium.com/futuresin/germany-anti-trust-slaps-
facebook-with-data-gathering-limit-208203293c6e.
Spiekermann, Sarah (2016) Ethical IT Innovation: A Value-based
System Design Approach. New York: Taylor & Francis.
Stahl, Bernd Carsten (2004) Responsible Management of Information
Systems. Hershey, PA: Idea Group.
Stahl, Bernd Carsten, Job Timmermans, and Brent Daniel Mittelstadt
(2016) The Ethics of Computing: A Survey of the Computing-
Oriented Literature, ACM Computing Surveys 48(4: February):
Article 55: 1–38. DOI: 10.1145/2871196.
Staksrud, Elisabeth, and Kjartan Ólafsson (2019) Tilgang, bruk, risiko
og muligheter. Norske barn på Internett. Resultater fra EU Kids
Online undersøkelsen i Norge 2018 [Access, Use, Risks, and
Potentials: Norwegian Children Online – Results of the EU Kids
Online Research], EU Kids Online og Institutt for medier og
kommunikasjon, Universitetet i Oslo,
www.hf.uio.no/imk/forskning/prosjekter/eu-kids-iv/rapporter.
Stromer-Galley, Jennifer, and Alexis Wichowski (2011) Political
Discussion Online, pp. 168–87 in Mia Consalvo and Charles Ess
(eds.), The Handbook of Internet Studies. Oxford: Wiley-Blackwell.
Sui, Suli (2011) The Law and Regulation on Privacy in China. Paper
presented at the “Rising Pan European and International
Awareness of Biometrics and Security Ethics (RISE)” Conference,
Beijing, October 20–21.
Sullins, John (2012) Robots, Love, and Sex: The Ethics of Building a
Love Machine, IEEE Transactions on Affective Computing 3(4):
398–409.
Sunstein, Cass (2001) republic.com. Princeton University Press.
Syvertsen, Trine, and Gunn Enli (2019) Digital Detox: Media
Resistance and the Promise of Authenticity, Convergence: The
International Journal of Research into New Media Technologies,
1–15. DOI: 10.1177/1354856519847325.
Tang, Raymond (2002) Approaches to Privacy – The Hong Kong
Experience,
www.pco.org.hk/english/infocentre/speech_20020222.html
(electronic document no longer available).
Tavani, Herman T. (2013) Ethics and Technology: Ethical Issues in an
Age of Information and Communication Technology (4th edn.).
Hoboken, NJ: Wiley.
Taylor, Linnet, Luciano Floridi, and Bart van der Sloot (eds.) (2017)
Group Privacy: New Challenges of Data Technologies. Dordrecht:
Springer.
Thomson, Judith Jarvis (1971) A Defense of Abortion, Philosophy and
Public Affairs 1(1): 47–66.
Thorn, Clarisse, and Julian Dibbell (eds.) (2012) Violation: Rape in
Gaming. CreateSpace Independent Publishing Platform.
Thorseth, May (2006) Worldwide Deliberation and Public Use of
Reason Online, Ethics and Information Technology 8(4): 243–52.
(2011) Virtuality and Trust in Broadened Thinking Online, pp. 162–73
in C. Ess and M. Thorseth (eds.), Trust and Virtual Worlds:
Contemporary Perspectives. New York: Peter Lang.
Time (1969) Denmark: Pornography: What is Permitted is Boring,
June 6,
http://content.time.com/time/magazine/article/0,9171,941672,00.html
Tong, Rosemarie, and Nancy Williams (2018) Feminist Ethics, in
Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy,
https://plato.stanford.edu/archives/win2018/entries/feminism-
ethics.
Turkle, Sherry (2011) Alone Together: Why We Expect More from
Technology and Less from Each Other. New York: Basic Books.
Vallor, Shannon (2009) Social Networking Technology and the
Virtues, Ethics and Information Technology 12(2): 157–70.
(2011) Flourishing on Facebook: Virtue Friendship & New Social
Media, Ethics and Information Technology 14(3): 185–99.
(2015) Moral Deskilling and Upskilling in a New Machine Age:
Reflections on the Ambiguous Future of Character, Philosophy and
Technology 28: 107–24.
(2016a) Social Networking and Ethics, in Edward N. Zalta (ed.), The
Stanford Encyclopedia of Philosophy (Winter 2016 Edition),
https://plato.stanford.edu/archives/win2016/entries/ethics-social-
networking.
(2016b) Technology and the Virtues: A Philosophical Guide to a
Future Worth Wanting. Cambridge, MA: MIT Press.
van der Velden, Maja (2014) Re-Politicising Participatory Design:
What Can We Learn from Fairphone? Paper presented at the
“Ninth International Conference on Culture and Technology and
Communication” (CaTaC),
www.duo.uio.no/bitstream/handle/10852/42039/2/maja_van_der_velden_fairphone2
van Wynsberghe, Aimee (2016) Service Robots, Care Ethics and
Design, Ethics and Information Technology 18(4): 311–21.
Verbeek, Peter-Paul (2017) Existentializing Technology: Vulnerability
in a Digital Age. Closing address to “Precarious Media Life,”
Sigtuna, Sweden, October 30 – November 1.
Verrier, Antonin (2007) Porte de Choisy,
www.festivalpocketfilms.fr/spip.php?article648.
Vignoles, V. L., E. Owe, M. Becker et al. (2016) Beyond the “East–
West” Dichotomy: Global Variation in Cultural Models of Selfhood,
Journal of Experimental Psychology: General 145(8): 966–1000.
DOI: 10.1037/xge0000175.
Wakabayashi, Daisuke (2018) California Passes Sweeping Law to
Protect Online Privacy, New York Times, June 28,
www.nytimes.com/2018/06/28/technology/california-online-
privacy-law.html.
Wall, John (2003) Phronesis, Poetics, and Moral Creativity, Ethical
Theory and Moral Practice 6(3: Sept.): 317–41.
Wallen, Jack (2018) How to Use Ublock Origin and Privacy Badger to
Prevent Browser Tracking in Firefox, TechRepublic, October 24,
www.techrepublic.com/article/how-to-use-ublock-origin-and-
privacy-badger-to-prevent-browser-tracking-in-firefox.
Wang, Tom (2016) Designing Confucian Conscience into Social
Networks, Zygon 51(2): 239–56.
Warburton, Nigel (2009) Free Speech: A Very Short Introduction.
Oxford University Press.
Ward, L. Monique (2016) Media and Sexualization: State of Empirical
Research, 1995–2015, Journal of Sex Research 53(4–5): 560–77.
DOI: 10.1080/00224499.2016.1142496.
Ward, Stephen J. A. (2011) Ethics and the Media: An Introduction.
Cambridge University Press.
Warren, Karen J. (1990) The Power and the Promise of Ecological
Feminism, Environmental Ethics 12(2): 123–46.
Warren, Lydia, and Meghan Keneally (2012) The Internet Vigilantes:
Anonymous Hackers’ Group Outs Man, 32, Who Drove Girl, 15, to
Suicide by Spreading Topless Photos of Her, Daily Mail, October
16, www.dailymail.co.uk/news/article-2218532/Amanda-Todd-
Anonymous-names-man-drove-teen-kill-spreading-nude-
pictures.html#ixzz2CC1ihYXx.
Warren, Samuel, and Louis Brandeis (1890) The Right to Privacy,
Harvard Law Review 4 (5: Dec. 15): 193–220.
Webster, Andrew (2018) Why Competitive Gaming is Starting to Look
a Lot Like Professional Sports, The Verge, July 27,
www.theverge.com/2018/7/27/17616532/overwatch-league-of-
legends-nba-nfl-esports.
Weiser, Mark, and John Seely Brown (1996) The Coming Age of Calm
Technology, PowerGrid Journal, 1.01 (July),
https://pdfs.semanticscholar.org/23a6/cdc72fa2a59d62ea94aa68cfe484982cf2b8
Westlund, Andrea (2009) Rethinking Relational Autonomy, Hypatia
24(4: Fall): 26–49.
Weston, Anthony (2018) A Rulebook for Arguments (5th edn.).
Indianapolis: Hackett.
Wheeler, Deborah (2006) Gender Sensitivity and the Drive for IT:
Lessons from the NetCorps Jordan Project, Ethics and Information
Technology 8(3): 131–42.
White, Aoife (2008) IP Addresses Are Personal Data, E.U. Regulator
Says, Washington Post, January 22, p. D1,
www.washingtonpost.com/wp-
dyn/content/article/2008/01/21/AR2008012101340.html.
Whitman, James Q. (2004) The Two Western Cultures of Privacy:
Dignity versus Liberty, Faculty Scholarship Series 113: 1151–221,
Paper 649, http://digitalcommons.law.yale.edu/fss_papers/649.
Wiener, Norbert ([1950] 1954) The Human Use of Human Beings:
Cybernetics and Society (2nd edn.). New York: Doubleday Anchor.
Wilson, Robert A., and Lucia Foglia (2017) Embodied Cognition, in
Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy,
https://plato.stanford.edu/archives/spr2017/entries/embodied-
cognition.
Wong, Pak-hang (2013) Confucian Social Media: An Oxymoron? Dao:
A Journal of Comparative Philosophy 12: 283–96. DOI:
10.1007/s11712-013-9329-y.
Woolf, Virginia (1929) A Room of One’s Own. New York and London:
Hogarth Press.
World Economic Forum (2016) The Global Gender Gap Report 2016,
www.weforum.org/reports/the-global-gender-gap-report-2016.
Wright, Paul J., Robert S. Tokunaga, and Ashley Kraus (2016) A Meta-
Analysis of Pornography Consumption and Actual Acts of Sexual
Aggression in General Population Studies, Journal of
Communication 66: 183–205. DOI: 10.1111/jcom.12201.
Xu, Vicky Xiuzhong, and Bang Xiao (2018) China’s Social Credit
System Seeks to Assign Citizens Scores, Engineer Social Behaviour,
ABC News, April 2, www.abc.net.au/news/2018-03-31/chinas-
social-credit-system-punishes-untrustworthy-citizens/9596204.
Yan, Yunxiang (2010) The Chinese Path to Individualization, British
Journal of Sociology 61(3): 489–512.
Young, Iris Marion (2000) Inclusion and Democracy. Oxford
University Press.
Yu, Peter K. (2012) The Confucian Challenge to Intellectual Property
Reforms, WIPO Journal 4(1),
https://scholarship.law.tamu.edu/facscholar/670.
Zuboff, Shoshana (2019) The Age of Surveillance Capitalism: The
Fight for a Human Future at the New Frontier of Power. New
York: Public Affairs.
Index
4Chan 38
Aarseth, Espen 199
Abidin, Crystal 148, 152
abortion 239–40, 241, 251–2
Abramson, Jeffrey 167
absolutism 26, 28
see also ethical absolutism
Academia 130
academics / social media 130
ACEIE (Africa Centre of Excellence for Information Ethics) 279
ACM (Association for Computing Machinery) 216
active choice-plus 184–5, 190–1
activist movements 166–7
Adams, Carol J. 175
add-ons 40–2
advertising 38, 43, 111, 131, 134, 160, 164, 201
affirming the consequent 29n1, 235–6
Africa Centre of Excellence for Information Ethics (ACEIE) 279
African Declaration on Internet Rights and Freedoms 280
African Information Ethics 277, 279
African perspectives 21, 23, 34, 215, 276–83
see also ubuntu
agency 21, 135, 138, 175, 257
aggression: see violence
ahimsa (nonviolence) 227
Ahmadinejad, Mahmoud 157
AI (Artificial Intelligence)
algorithms 8, 19
deontology 217
ethical guidelines xiii, xvi, 25
eudaimonia 267–8
increasing role of xii
privacy 24
virtue ethics 217
well-being 25
Akemu, Ona 153
Albrechtslund, Anders 18, 59
Alexa 37–8
Alexander, Leigh 206
algorithms ix, 8, 19, 31, 164, 218
Allcott, Hunt 145
allemannsretten (“all people’s rights”) 105, 115, 116, 127, 248
alt porn 172, 207–8
Amazon 43, 177, 270
Ames, Roger 51, 257, 265, 273, 274
analogical arguments 91, 97–8, 123–4, 141–2, 197
analogue technologies 11, 12–14, 22–3
Anderson, Eric 256
Andersson, Yvonne xii
animism 193
anonymity online 3, 5, 37, 44, 87, 205–6
anonymizer software 44
Anonymous 2, 5–6
Antigone (Sophocles) 265–6
AoIR (Association of Internet Researchers) xvi–xvii, 25, 26
Aphrodite 192
Apology, The (Plato) 264
Apple 56, 132, 165, 177, 270
Arab Springs xii–xiii, 129, 157
Arab Winters xii, 129, 157
Arendt, Hannah 88
arguments 29, 80, 82, 88, 91, 97, 180
Aristotle
and African traditions 279
ethical pluralism 245
ethics as practice 209–10
eudaimonia 65–6n2, 262, 264
excellence 264, 274
individual/community 72
judgment 263–4
phronēsis xi, 25, 31–2, 97, 201, 218, 256–7
reason 261–2
virtue ethics 124, 249, 260–1, 266
art/pornography 176, 179
Artificial and Independent Systems 166
Artificial Intelligence: see AI
Ashman, Rachel 258n4
Association for Computing Machinery (ACM) 216
Association of Internet Researchers: see AoIR
atomic bombs example 224
attitudinal change 171, 179, 183, 204–5
Attwood, Feona 172, 182
Aufderheide, Pat 102
Augustine, Saint 177, 186, 192
authoritarianism 65, 160, 163–4
see also digital authoritarianism
autonomy
deontology 24, 189–90, 207–9
feminist 206
flourishing 283
freedom 21, 25, 64, 66, 135
gaming 206
“just meat” 229
Kant 25, 180, 229, 230, 252
personhood 188
pornography 175
RapeLay 208
rights 24
self 59–60
SEMs 180, 181
sexual identity 180, 181
see also relational autonomy
Bäcke, Maria 206
bank data 46, 54, 55, 69
Banks, David xvii
Baron, Naomi 20
Batman movie killings 200
Baumol, William J. 133
BDSM (Bondage-Discipline-Sadism-Masochism) 206, 207
Becker, Barbara 187, 201
Benhabib, Seyla 161
Bentham, Jeremy 133, 194, 220, 221–2, 223–4
Berbers, Yolande 24, 44, 56
bereaved parents 147–8
Berry, David 11, 14
Bester, Coetzee 279
Bewley, Susan 195
Bezos, Jeff 174
Bezos vs. American Media 174
Bielby, Jared 217, 282
Big Data xii, 8, 19, 44, 59, 131, 164
Birkner, Christine 11, 200
BitTorrent 45, 99
Black Mirror 58
Bleaney, Rob 2
Blinder, Alan S. 133
blogs 17, 77
Boateng, Boetema 99, 101, 122–3
body
Confucian ethics 271–2
dissatisfaction with 183
identity 271–2
information 53
“just meat” 190
materialism 186–7
mind 187, 256, 265
rights 241
self 256
sexuality 177n5, 186, 189–90, 192
social media 166
soul 146, 186
see also dualism
body-subject (Leibsubjekt) 187, 188, 189, 201
Bondage-Discipline-Sadism-Masochism (BDSM) 206, 207
books, printed 12, 112n2
Booth, Paul 11
Boss, Judith 243
both/and logic 28, 217, 259, 260
boyd, danah 36, 275
Braghieri, Luca 145
Braidotti, Rosi 14, 217
Brandeis, Louis 16, 61, 75
Breivik, Anders Behring 200
Bromseth, Janne 175
Brown, John Seely 152
Brown, Pat 170, 200
Brown, T. 21
Brownlee, Kimberley 123
Buchanan, Elizabeth 216
Buddhism
community well-being 248
and Confucianism 74–5
discontent 80–1
ethics 21
Hughes 267
individual 23, 72
person 53
privacy 53, 80
property rights 126–7
Pure Land tradition 66–7
relational self 63
sacredness of life 226
selfhood 67, 246–7
virtue ethics 260–1, 266–7
Bunz, Mercedes 19–20
Burgess, J. Peter 25, 57, 217, 267
Burk, Dan xvii, 36, 67, 69–70, 101, 112–13
BusinessGhana 106
Bynum, Terrell Ward 216, 217
California 70
Cambridge Analytica xiii, 8, 11, 164–5
Campbell, Heidi 18
Canonical Ltd 106
capitalism
communicative 131, 165
multinational 30
surveillance xiii, 166
Capurro, Rafael 217, 247, 277, 282
care 255, 261, 268, 269–70
see also ethics of care
carebots 258, 267, 268, 269–70
Carey, Benedict 145
Carey, James 158–9
Carlsen, Amanda 258n4
cartoons of Muhammad 225, 246–7
Cascone, Kim 11, 14
case-studies 34–5
CaTaC (Cultural Attitudes towards Technology and Communication) xvi
Categorical Imperative (Kant) 227–8
CC (Creative Commons) 104–5, 125–6
CDs xvii, 91, 92, 95, 120–1
celebrity photographs 16
censorship 182–3, 184
CEPE (Computer Ethics: Professional Enquiries) xvi
Chan, Joseph 245
cheating 57, 189
Chen, Zhen Troy 86, 89
Cheong, Pauline 53
Cheung, Ming 86, 89
child pornography 179, 183
child slavery 212
China, People’s Republic of
privacy 85–6
relational self 70–1
SCS xiii, 8, 20, 48, 57, 58, 71, 78, 166
Christianity 81–2, 226
Christman, John 77–8, 258
Cilliers, Liezei 280
citizens 11, 57–8, 129, 158, 160
civil rights movement 231
Coates, D. Justin 187
Coban, Aydin 6
Coeckelbergh, Mark 162, 173n2, 192
Coffey, Ann 184–5
Cohen, Julie 267
Cold War 224n1
Coleman, Kari Gwen 267
Columbine killings 200
commodification 88, 131, 134–5
commons 116, 124
see also Creative Commons (CC)
communication technologies 1, 159
see also ICTs
communication venues 38, 172–3
communications
digital media 17–22
online 5, 38, 140–1, 145, 149
selfhood 20–1
skills 142
Snowden 36
SNSs 17
communicative capitalism 131, 165
communitarianism 162–3, 167–8, 267
community
copying 113
eudaimonia 126
good 103, 113
harmony 47, 48, 70, 113, 125, 253, 273
and individual 67, 72
and self 278–9
well-being 25, 107, 113, 114, 126, 243–4, 248–9
compatibilism 187
complementarity, logic of 259, 260
complete sex 186–90, 197, 256
computer ethics 216, 263, 267
Computer Ethics: Professional Enquiries (CEPE) xvi
computer viruses 55, 99
conclusions 29, 31
Confessore, Nicolas 8
conflict minerals 153, 154, 212
Confucian ethics 21, 34, 215
and African traditions 279
body 271–2
Buddhism 74–5
community well-being 248
copying 90, 91–2, 113
copyright 275
digital media 275–6
ethical pluralism 245
Golden Rule 81–2
harmony 274, 279
heart-and-mind 257, 265, 274
human beings 271, 272–3
individual 23
knowledge transmission 125
privacy 67, 72, 115
property rights 118, 126–7
relational self 63
selfhood 246–7
virtue ethics 260–1, 266–7
xin 257, 265, 273
Congo, Democratic Republic of 153
Consalvo, Mia xvi, 192, 202–3
conscientious objection 226, 265–6
consent 15–16, 69–70
consequentialism
decision-making 220–1, 225
dementia, frontotemporal 256
ethical dilemma 232–3
illegal downloads 94, 96
limitations 221–3
pornography 180
state 225
stealing 95
utilitarianism 93, 219–25
wartime 223–4
consumers 11, 70, 160, 173, 191, 211–12
contentment 65–6n2, 139, 142–3, 226–7, 262, 266–7, 274
see also eudaimonia
convergence 12–16
Conway, Paul 256
copy protection schemes 99
copying
analogue 13
Confucianism 90, 91–2, 113
distribution 114, 275
ethical pluralism 119
Facebook 90
information 98
legality 116–17
property rights 91
ubuntu 91–2
copyleft approaches 91, 101–11, 114
copyright
Confucian ethics 275
deontology 101, 122–4
Sen 267
Thailand 112–13
ubuntu 111
utilitarianism 100–1
virtue ethics 124–6
Copyright Act, USA 112n2
copyright law
cosmopolitanism 20
deontology 123–4
disobeying 122, 123, 125–6
fair use 112
illegal downloads 30
photography 15
Pirate Party 100
USA/EU 91, 109, 112, 119, 165
corporations 70, 166, 177–8
cosmopolitanism 20
Couch, Danielle xvii
Couldry, Nick 11
Council of Europe 36, 47
courage (virtue) 216, 224n1, 268, 269, 270
Cox-George, Chantal 195
CPR numbers, Denmark 54
Creative Commons (CC) 104–5, 125–6
see also commons
Creative Commons Attribution-ShareAlike 108
creativity 60, 201
credit card accounts 54
credit-rating companies 57–8
Crito (Plato) 220, 226
Cultural Attitudes towards Technology and Communication (CaTaC) xvi
culture
death 151
deontology 137, 162
differences 52, 72–5, 82–5, 91–2, 162, 178, 215
ethical relativism 118–19, 234, 237
ethics 23
gender 52
generalizations 51–3, 111, 117, 126, 247–9
hybridization 52, 66, 237–8
identity 50
intellectual property 111–12
norms/values 26, 74, 215
Norway 49, 50
pornography 170–1, 175, 178–9
privacy 37, 45–9
selfhood 82
sexuality 207
shifts in 63–4
state 64, 137n2
stereotypes 49
Cumiskey, Kathleen M. 151
Custer’s Revenge 210, 211
Cuthbertson, Anthony 195
cyberbullying 1, 2, 9, 19, 131, 175
cyberlibertarianism 163–4
cybernetics 262–3
cyberspace 163, 186, 192
Dahlberg, Lincoln 8, 162–3
Daily Mail Reporter 7, 200
Danaher, John 19, 37, 214
Dark Web 171–2, 183
data commissioners, EU 44
Data Privacy Directives, EU 68–9, 79–80
data privacy protection 44, 48, 68, 69–70, 80–1
data-mining 43
Davis, Nicola 195
Davisson, Amber 11
Dean, Jodi 131, 165
death online xiv, 34, 128–9, 146–52
Death Online Research (DOR) 146–7
death threats 2, 6, 38
Debatin, Bernhard 61, 69, 88, 225, 246–7
DeCew, Judith 61
decisional privacy 75
decision-making 7, 160, 220–1, 225, 262
see also ethical decision-making
Declaration of Independence 229
dementia, frontotemporal 256
democracy
authoritarianism 65
citizens 129
communication technologies 159
deontology 168
digital media 157
feminist ethics 167, 168
freedom 167
law 266
libertarian 167–8
norms 9, 34
online 159
post-digital age 158, 167
rights 9
technology 158–69
virtue ethics 168
Dencik, Lina 57
Denmark
art/pornography 176
blogs 77
CPR numbers 54
data protection laws 48
freedom 48
Muhammad cartoons 246–7
pornography 175
privacy 42, 61–2
religion 48
deontology xv–xvi, 34, 225–32, 248
AI/Internet 217
autonomy 24, 189–90, 207–9
copyright 101, 122–4
culture 137, 162
democracy 168
empathy 256
equality 188, 197–8
ethical absolutism 229
EU 69
as framework 215
freedom 136
identity 21
illegal downloads 94, 95, 96, 250
individualistic 155
“just meat” 180, 181
Kant 227–9, 248
Netherlands 153
pornography 171, 173, 180, 191–2
privacy 3, 80
promise 222, 225, 232
property rights 126
religion 225–6
respect 189–90
Scandinavia 248
sexbots 193, 194
SNSs 133–6
stealing 93, 95
virtue ethics 92
Descartes, René 162, 186, 265, 271, 272
design
contextual awareness 276
ethically aligned xiv, 153, 166, 217, 268
slow 152, 154
virtue ethics xiv, xvi, 267
desire 170, 172, 187–8, 190, 194–5, 196–8
deskilling 195, 198, 269–71
determinism 83, 187
Dewey, Caitlin 203
dialogical approaches 26, 28, 78, 162
Dibbell, Julian 175, 205, 206
digital authoritarianism xiii, 58, 129, 157, 164, 165, 167
digital cameras 37
digital detox xiii, 11, 145, 149
digital divide 98, 107
digital footprint 54
Digital Futures project, European Commission 7
digital legacies 129
digital literacy 279
digital media 7
and analogue communication 11–12
communications 1, 17–22
Confucianism 275–6
convergence 12–16
democracy 157
entertainment 159–60
feminist ethics 259–60
internet-connected 3
studies of 11
digital media ethics 1, 24–5, 216–17, 258–60
Digital Millennium Copyright Act (DMCA) 99
Digital Religion 18
digital rights management (DRM) 99
digital technologies 9, 11, 12–14
discontent 66, 80–1
disobedience 9, 122–4, 125–6, 253, 254, 266
distribution 76, 98, 114, 275
DMCA (Digital Millennium Copyright Act) 99
dogmatism ix, 208, 209
DOR (Death Online Research) 146–7
Dota2 199, 210
doxxing 38
DRM (digital rights management) 99
Drotner, Kirsten 4, 9
dualism 187, 189, 190, 192, 259–60
Duhigg, Charles 43
Dungeons and Dragons 205
Düsterhöft, Isabel K. 214
DVDs xvii
EAG (Ethics Advisory Group) 24
Earp, Brian D. 19, 37
ecological systems 257
EDPS (European Data Protection Supervisor) 24
education, as extrinsic good 65–6
EFF (Electronic Freedom Foundation) 41, 99, 100
egalitarian dialogue 162
Egypt 166–7
Eichmeyer, Sarah 145
Eickelman, Dale F. 245
Eisenstein, Elizabeth 20
either/or thinking 4, 10, 12, 26–8, 32, 92, 116, 259
elderly care 270
Electronic Freedom Foundation (EFF) 41, 99, 100
electronics industry 154
Elshtain, Jean Bethke 159, 160, 163–4
emails 17, 38–40
emancipation
gaming 207
pornography 175, 180, 205
technology 269
women’s 77, 176, 181, 207, 229, 230, 258n4
embodiment
analogue reception 14
emotions 256–7
phenomenology 186
relationships 256
resistance 167
sexbots 171, 192
sexual experiences 187–8
and virtual communities 163
emotion
artificial 196
care 255, 269
embodiment 256–7
feminist ethics 255–6
morality 265
virtue ethics 255–6
women 253–4
empathy
deontology 256
deskilling 198
friendship 139–40, 194–5
Habermasian and feminist ideals 165
solidarity 161–2
as virtue 143, 161n4, 168
virtue ethics 194
encryption software 44
Engels, Friedrich 162
Enli, Gunn xiii, 130, 145, 149
enslavement 51, 154, 162, 212, 229, 231
Entertainment Software Rating Board (ESRB) 201–2
entitlement rights 56–7
environmental ethics 249
equality 162, 180, 188, 197–8
gender 83–4, 176, 193, 258n4
eros 176, 214
ESRB (Entertainment Software Rating Board) 201–2
Ess, Charles
AoIR 25
commons 116
community well-being 243
copying 91
Creative Commons 125
democracy 168
digital immortality 146
empathy 195
existentialism xii, 269
Fairphone 156
ICE 217
Original Sin 177n5
political parties 164
post-digital era 11
privacy 59, 246
Pygmalion 192
relational self 21, 115
SNSs 130
essentialism 83, 255
ethical absolutism
abortion 239–40
deontology 229
and ethical relativism 119, 239–40, 243
human rights 229
as meta-ethical position 34, 37, 215
monism 26, 238–41
and pluralism 73
ethical choices 135, 155, 156, 212, 251
ethical decision-making 7, 82, 171, 186, 220–1, 249, 256, 262
ethical dilemmas 14, 218–19, 232–3
ethical guidelines xiii, xvi
ethical monism 26, 27, 28, 73–4, 209
see also ethical absolutism
ethical pluralism 241–7
both/and 28
copying 119
and ethical relativism 236–7, 282–3
greeting/parting rituals 234
as meta-ethical position 34, 37, 208, 215, 236
as middle ground 74
privacy 27, 72–3, 74, 75, 246
shared norms 119
strengths/limits 244–5
tolerance 235
ethical relativism 233–8
affirming the consequent 235–6
culture 118–19, 234, 237
and ethical absolutism 119, 239–40, 243
and ethical pluralism 236–7, 282–3
as meta-ethical position 34, 37, 127
and monism 26, 27, 28
moral judgment 237
privacy 73, 75
property rights 118
ethical toolkit xi–xii, 21, 215, 218, 245
ETHICOMP (Ethics and Computing) xvi
ethics
agency 21
Aristotle 209–10
consent/copyright 15–16
design xiv, 153, 166, 217, 268
digital media age 1, 23, 24
global 37
information 280
terms used 29n1
Ethics Advisory Group (EAG) 24
Ethics and Computing (ETHICOMP) xvi
ethics of care
digital media ethics 258–9
and feminist ethics 34, 92, 171, 215, 249
pornography 171
relations 77–8
responsibility 254–5
Ethics of Computer Games, The (Sicart) 201
ethnocentrism 51, 249
eudaimonia
AI 267–8
Aristotle 65–6n2, 262, 264
community 126
EU 267
ICT design 267
judgment 226–7
SNSs 143
Socrates 264
virtue ethics 139
see also contentment
European Commission 7, 153
European Convention on Human Rights 36
European Data Protection Supervisor (EDPS) 24
European Journal of Communication 5
European Union
copyright laws 91, 100–1, 109, 112, 119, 165
data commissioners 44
Data Privacy Directives 68–9, 79–80
data protection laws 68
deontological approach 69
ethical guidelines xiii
eudaimonia 267
GDPR xiii, 24, 44, 68–9, 71, 86
Google 45
KidsOnline survey 175–6
law/ethics 27
privacy 44, 61, 65–6, 75, 166
excellence
Aristotle 264, 274
Confucius 274–5
eudaimonia 126, 139, 264
gaming 203
human beings 261–2, 263
Socrates 274
virtue ethics 125, 168
virtues 141
exceptions to the rule 52, 232
exemplary person (junzi) 273–4, 275
existentialism xii, 269
Facebook xiii
communication 17
copying 90
deceased person’s page 148
friendship 128
intellectual property 90
“like” button 11, 145
micro-targeting 43
mood manipulation 8
“napalm girl” (Kim Phúc) 178
privacy 18, 56, 88
scandals 8, 130
surveillance 59
Terms of Use 90, 110
Todd, Amanda 2
withdrawal from 60, 145
fair use 112
fairness 154, 253
Fairphone xiv, 129, 153–5, 156, 203, 211–12, 213
Fairtrade 129, 154, 155, 212
fake news 8, 129, 161, 164, 166
faking it 196–7
FDL (Free Documentation Licenses) 109
Federalist Papers 158–9
FEMEN movement 176
feminism
autonomy in gaming 206
contemporary 177n4, 249
ethical pluralism 245
on Habermas 161
pornography 214
relational autonomy 230, 248, 283
second-wave 251, 257n4
sexualities 207
feminist ethics 251–60
complete sex 186–90
cultural differences 215
democracy 167, 168
digital media 259–60
emotion 255–6
ethics of care 34, 92, 171, 215, 249
on Habermas 161–2, 165
moral wisdom 265
pornography 171
relational aspects 21
selfhood 77
sexuality/identity 171, 187
violence 34
file-sharing 99, 115
filter bubbles 8, 164
Finneman, Niels Ole 124
Firefox 40–2, 107
First Amendment justifications 202
first-person shooter (FPS) 199
Floridi, Luciano
AI xiii, 25
Big Data 59
distributed morality 155
flourishing 267
Global Information Ethics 282
human dignity 267
Internet of Things 217
my information concept 53–4, 61
onlife 7
privacy 63
“Red” products 213
shared responsibility 135–6, 212
shopping Samaritan 55, 156
FLOSS (Free/Libre/Open Source Software)
copyleft approaches 91, 101–5, 114
Linux 105–7
in practice 107–11
ubuntu 277
flourishing
autonomy 283
Floridi 267
freedom of speech 182
individual/community 279
relationships 198
Vallor 194–5, 268
virtue ethics 24–5, 139, 143, 152
virtues 270
fluidity 17–18, 20
Fødselsnummer, Norway 54
Foglia, Lucia 256
FOSS (Free and Open Source Software) 102
Foucault, Michel 21, 58, 272
Foundations of the Metaphysics of Morals (Kant) 227–8
Fourth Amendment 61
FPS (first-person shooter) 199
France xiii, 248
Frankenstein (Shelley) 10, 192
Free/Libre/Open Source Software: see FLOSS
Free and Open Source Software (FOSS) 102
Free Documentation Licenses (FDL) 109
free software 90, 102–3, 104, 105, 106–7
Free Software Foundation (FSF) 102–3, 107, 108
freedom
African thought 278
autonomy 21, 25, 64, 66, 135
collective 11
contentment 274
corporate 70
democracy 167
Denmark 48
deontology 136
existentialism xii
of expression 9, 77, 174, 178, 183, 239, 247
in free software 102–3, 104, 107
as illusion 187
individual 8, 9, 11, 115, 163
internet 157
of opinion 66
of press 247
privacy 75
of speech 182, 183
utilitarianism 231
women 60, 77
Freedom House 58
Freelon, Deen xvii
#freethenipple 176, 258n4
friendship
communication skills 142
cyberbullying 131
empathy 139–40, 194
Facebook 128
online 129–37, 138–46, 165, 268
post-digital age 129, 138
selfhood 22
SNSs 128, 261
virtue ethics 128, 138–9
see also relationships
FSF (Free Software Foundation) 108
Fuchs, Christian 88
Gabriels, Katleen 19, 37
#Gamergate xiv, 38, 203, 212
Games Research Network listserv 213
gaming
Consalvo xvi, 202
creativity 201
as emancipation 207
feminist autonomy 206
mobile devices 199
phronēsis 211
professional 199–200
rape 205
and real life 206–7, 209
role-playing 199, 206
sex 204–6
skills 203–4
as toxic culture 202–3, 212
utilitarianism 200–1, 204
violence 7, 138, 170–1, 173, 199–201, 204–6
virtue ethics 209
virtues 210–11
Gandhi, Mahatma 122, 227, 231
Gawronski, Bertram 256
gay, lesbian, bisexual, transgendered, and/or queer (GLBTq) 174–5
GDPR (General Data Privacy Regulation) xiii, 24, 44, 68–9, 71, 86
Gehl, Robert W. 171–2
gender
culture 52
equality 83–4, 176, 193, 258n4
rules/fairness 253
sexuality 172, 174
stereotyping 255
see also masculinity; women
General Data Privacy Regulation: see GDPR
generalizations 51–3, 111, 117, 126, 247–9
Genesis, Book of 177n5
genocide 28, 237, 244
Gentile, Douglas A. 200–1
Gentzkow, Matthew 145
Gerber, Nina 85, 89
Gerber, Paul 85, 89
German Ideology, The (Marx and Engels) 162
Germany 60, 61–2, 66, 202, 248
Ghosh, S. 175
Gibson, William 146, 186
Gill, Rosalind 257n4, 258n4
Gilligan, Carol 251–2, 253–5
Glancy, Dorothy J. 16, 75
Glassman, Michael xvii
GLBTq (gay, lesbian, bisexual, transgendered, and/or queer) 174–5
Global Information Ethics (Floridi) 282
Global Kids Online 5
GNU Operating System 90, 105, 108–9
gold market 154
Golden Rule 81–2, 226
Good Samaritan 156, 213
goods
advertising 134
common 48, 124, 125
community/individual 248
copyright 100
extrinsic/intrinsic 65–6, 80
public 101, 124, 127
Socrates 264
Google xiii, 44, 45, 56, 165
Google Voice 37–8
GPS-equipped digital cameras 37
Grand Theft Auto V 206, 210
greased information 12, 16, 54
green electronics 154
Greene, Joshua D. 256
Greenleaf, Graham 71
Greenpeace 154
greeting/parting rituals 234
grieving 128–9, 147–8
Grodzinsky, Frances S. 170–1
Guess, Andrew 166
Guillory, Jamie E. 8
Habermas, Jürgen 88, 161, 165, 167, 168, 248
hackers 5, 8, 19, 43, 45, 56
Haddon, Leslie 175
Hallnäs, Lars 152
Hancock, Jeffrey T. 8
Hansen, Mette Halskov 68
Hård af Segerstad, Ylva 147, 217
Harding, Luke 178
Hargittai, Eszter 56, 87, 88–9
harm reduction 184–5
harmony
community 47, 48, 70, 113, 125, 253, 273
Confucius 274, 279
contentment 226
Japan 81
reason 267
ubuntu 48
well-being 263
see also eudaimonia
Harviainen, J. Tuomas xvi
health care 229–30
Health Insurance Portability and Accountability Act 69
health-tracking devices 37
hedonic calculus 221–2, 224n1
Hegel, Georg Wilhelm Friedrich 72
“Heinz dilemma” 254
Henaff, Marcel 160
Henderson, Kathrine Andrews 216
heuristic 52, 83, 126
Hick, Darren 112n2
Hildebrandt, Mireille 8, 9, 63, 266, 270
Hintz, Arne 57
Hjorth, Larissa 151
Hobbes, Thomas 78, 271
Hoffman, Samantha 58
Hoffmann, E. T. A. 9–10, 192
Holmes, James 200
Holocaust 237, 239
Holpuch, Amanda 56
Holst, Catherine 193
Homo ludens 200
Hong Kong 70–1, 166–7
Hongladarom, Soraj
intercultural dialogue xvi
pluralism 74, 246
privacy 53, 72–3, 74, 80, 82, 88, 246
self 73–5
Thai ID 46
Horizon 2020 program 153
Hornung, Peter Michael 176
Hovde, Astrid Linnea 148–9
Howard, Philip N. 157, 164
Hsu, Shang Hwa 202
Hughes, James 267
Huizinga, Johan 200
human beings 196, 261–2, 271–3
human dignity 25, 166, 267
human rights 36, 229, 230–1, 241, 280
Human Use of Human Beings: Cybernetics and Society, The (Wiener) 216
Hume, David 255
Hursthouse, Rosalind 264–5
IACAP (International Association for Computing and Philosophy) xvi
ICE (information and computing ethics) 24, 216–17, 282
Ice – Jeff on Top Pulling Out (Koons) 176
Iceland, Pirate Party 100
ICT4D (ICT for development) 279
ICTs (Information and Communication Technologies) xiv
design 267
dualism 259–60
eudaimonia 267
Scandinavia 124
state support 124
virtue ethics 267
Wiener 269
identity
body 271–2
commodification 88
culture 50
deontology 21
government information 54
group norms 51
literacy 21
privacy 22
relational 63, 81, 106
responsibility 21
selfhood 20–1, 60, 63
sexual 171, 172, 180, 181, 187
utilitarianism 21
identity theft 54, 55
IEEE (Institute of Electrical and Electronics Engineers) xv–xvi, 138, 166,
217, 267–8
IIE (Intercultural Information Ethics) 282
illegal downloads
consequentialism 94, 96
copyright law 30
deontology 94, 95, 96, 250
developing country 120–1
music industry 90, 93–4, 99
utilitarian analysis 94, 250
immortality, digital 146
In a Different Voice (Gilligan) 251
India 175, 230
see also Gandhi, Mahatma
indigenous traditions
community well-being 107, 248
cultural differences 178
privacy 67
ubuntu 277
US dominance 101, 123
individual
atomic 271
Buddhism 23, 72
and community 72, 279
Confucianism 23
deontology 155
freedom 8, 9, 11, 115, 163
moral agency 257
privacy 22, 48, 66, 67, 73, 115
property rights 122
rights 248, 252, 253
selfhood 21, 138, 217
utilitarianism 155
Indonesia 175
influencers 165
information
analogue technologies 13
body 53
as common good 124, 125
copying 98
digitized 17
distribution 98
ethics 280
Floridi on 53–4, 61
greased 12, 16, 54
personal/sensitive 42, 44, 53
privacy 37–8, 54–5, 75–6, 79
protection 43
Information and Communication Technologies: see ICTs
information and computing ethics: see ICE
information gathering 69–70
information sharing 14–15, 17, 45, 76
Innis, Harold 20
Instagram 59, 60, 129, 130, 140–1, 174
instant messaging 45
Institute of Electrical and Electronics Engineers: see IEEE
instrumentalism, technological 158
Intellectual Property (IP) 90, 91, 98–109, 110, 111–12, 117, 134
Intercultural Information Ethics (IIE) 282
International Association for Computing and Philosophy (IACAP) xvi
internet 3, 6–7, 19, 27, 157, 171–2, 216
Internet of Things: see IoT
Internet Protocol (IP) address 40, 42–3, 44
Internet Service Providers (ISPs) 184
Internet World Stats 17
intimate spheres (intimsfære) 42, 62, 80
IoT (Internet of Things)
emerging xii, 8
ethics xvi
privacy 24
Smart Cities 19–20, 55–6
virtue ethics 217
IP: see Intellectual Property
IP: see Internet Protocol
Iran 157
Islam 81–2, 245
isolationism, moral 237–8
ISPs (Internet Service Providers) 184
Japan 81, 192–3, 202, 224
see also RapeLay
Jaspers, Karl xi–xii
Jefferson, Thomas 229
Jenkins, Henry 15
Jensen, Jakob Linaa 19
Jesus Christ 264
Jin, Dal Yong 165–6, 178
Jóhannsdóttir, Ásta 176, 258n4
Johnson, Deborah 65–6
Judaism 81–2
judgments, ethical 29–32
consequences 7
discernment 226–7, 265
hedonic calculus 224n1
Midgley 237
phronēsis xiv, 25, 226, 256–7, 265
and politics 263–4
Jünger, Jakob xvii
“just meat”
autonomy 229
body 190
deontology 180, 181
materialism 187
means/ends, Kantian 188, 229
objectification 175, 180
rape 207
sexuality 190
justice 122–4, 227, 250, 252–4, 279
Jyllands-Posten 247
Kabloona community 237, 239, 243, 244
Kang, Cecilia xiii
Kant, Immanuel
autonomy 25, 180, 229, 230, 252
Categorical Imperative 227–8
deontology 227–9, 248
lies 231–2
means/ends 188, 229
promises 227–9
reason 261–2
Kasperowski, Dick 147
Keneally, Meghan 2, 6
Kennedy, Steve 154
Khandelwal, Swati 55
kidney disease 242–3, 244
KidsOnline survey 175–6
killing 241
Kimppa, Kai xvi
King, Martin Luther, Jr. 122, 123, 227, 231, 253
Kirwil, Lucyna 4–5
Kitiyadisai, Krisana 46, 66
knowledge transmission 125
Kobie, Nicole 57, 58
Kohlberg, Lawrence 252, 254
Kondon, Zsuzsanna 20
Koons, Jeff 176, 177
Kostka, Genia 8, 48, 57, 58
Kramer, Adam D. I. 8
Kraus, Ashley 7
Lagerkvist, Amanda xii, 152, 269
Lang, Fritz 192
Lange, Patricia G. 63
Latin America 102, 108, 277
Latonero, Mark 91
law
and conscience 239
democracy 266
digital technologies 9
disobedience 125–6, 253, 254
EU 27
just/unjust 122–4, 227, 250
Norwegian non-owners’ rights 115–16
platform imperialism 166
privacy protection 246
religion 225
women’s clothing 175
see also copyright law
Leaver, Tama 152
Levin, Sam 178
Levinas, Emmanuel 74
Levy, David 170, 193, 195
Lewis, Patricia 224n1
Lewis, Paul 11, 145
libertarian democracy 167–8
LibreOffice 107–8
licensing 108, 109, 114
lies 226, 231–2
Lightbeam add-on 40–2
Lim, Merlyna 128, 166–7, 175
Lindgren, Simon 11
Ling, Rich xvi, 19
LinkedIn 130
Linux 105–7, 277
Liptak, Adam 202
literacy 21, 88, 279
Livingstone, Sonia 4–5, 131, 134, 175
Loccioni 153
Locke, John 229
logic, terms of 29
Lomborg, Stine 77, 130
Love and Sex with Robots (Levy) 193
loving 188, 190, 194
Lu, Jessica 152
Lü, Yao-Huai 67
Lüders, Marika 130
Luke, St 156
Lukesch, Helmut 202
MacAskill, Ewen 36
Mackenzie, Catriona 282–3
Majama, Koliwe 280
Malan, Beverley 279
Malaysia 166–7
Maner, Walter 216
manipulation 8, 160, 161, 164
Martens, H. 21
Martin, Daniel 184
Marwick, Alice 36, 56, 87, 88–9
Marx, Karl 162
masculinity 190, 212, 251
Massanari, Adrienne L. 38, 203
massively multiplayer online role-playing games (MMORPGs) 199
Massumi, Brian 14
materialism 186–7
Matich, Margaret 258n4
McArthur, Neil 214
McKenna, Michael 187
McLuhan, Marshall 20, 158
meat as term 186, 192
see also “just meat”
media log 131–2, 142–3
mediatization 173–4
Medium Theory 20
Meikle, Graham 19–20
memorial pages 149
Mendez, Mario F. 256
Menkiti, Ifeanyi A. 277
Messenger 129, 147
meta-ethical frameworks 34, 37, 215
absolutism (monism) 238–41
issues 198–9, 217
pluralism 241–7
relativism 233–8
meta-theory 33, 96, 200–1
Metropolis (Lang) 192
Meyrowitz, Joshua 20
microblogging 42, 171–2
Microsoft 56, 114
micro-targeting of advertising 43, 131, 160
Midgley, Mary 1, 215, 237–8
Mikkola, Mari 214
Mill, John Stuart 182–3, 220, 223–4
mind/body dualism 187, 189, 190, 256, 265
Mittelstadt, Brent Daniel 153, 216
MMORPGs (massively multiplayer online role-playing games) 199
mobile devices xvi, 151, 171–2, 199
Modern Warfare 2 200
monism 26, 27, 28, 215, 238–41
see also absolutism; ethical monism
Montaigne, Michel de 248
Moor, James H. 16, 54, 76
MOOs 205
moral panics 3, 4, 9, 10, 29
morality 155, 252, 253, 254–5, 265
Moseley, Raam 99
mourning online 128, 149, 150–1
Moyer, Melinda Wenner 200–1
MP3 13–14
MUDs 205
Muhammad, Prophet 225, 246–7
Mukwege, Denis 237, 239
Mullins, Phil 17
Murad, Nadia 237, 239
Musgrave, Frank W. 242–3
Musi (“no-self”) 66–7
music industry 90, 93–4, 99
see also illegal downloads
music recording equipment 12–13
Myska, Bjørn 196, 231
Nakada, Makoto 67
“napalm girl” (Kim Phûc) 178
Nash, Victoria 173, 205
National Committee for Research Ethics (NESH) 62
national identity cards 46–8
National Rifle Association 4
National Security Agency xiii, 8
#neda 157
NESH (National Committee for Research Ethics) 62
Netflix 103
Netherlands 153
netiquette 149, 150–1
neurobiology 256
Neuromancer (Gibson) 146, 186
New Media 7
newspapers 12, 16
Ní Bhroin, Niamh 62n2
nirvana 67
Nissenbaum, Helen 62, 76, 80
Nixon, Paul G. 214
Nobel Peace Prize-winners 237, 239, 244
Noddings, Nel 255
nonviolence 227, 231
norms
applications 27, 28
culture 26, 74, 215
democracy 34
group 51
misjudgments 50, 51
shared 119, 243–4, 245
universally valid 238–9
and values 31–2
Nørskov, Marco 194
Norway
allemannsretten 105, 115–16, 248
culture 49, 50
Fødselsnummer 54
NESH 62
Outdoor Recreation Act 116
privacy 42, 61–2
privacy protection 86
public photographs 15
sexting 176
tax records 48
NoScript 41–2, 44
no-self (musi) 66–7
NSA (National Security Agency) databases 36
Nussbaum, Martha 266, 267
Nyhan, Brendan 166
Nyholm, Sven 19, 37
objectification 175, 180
obligations, supererogatory 156
obscenity 190
offline time 18, 60, 275–6
Øian, Hogne 115–16
Ólafsson, Kjartan 176
Olivetti 153
Ong, Walter 20
Onlife 7
online communication 5, 38, 145
online democracy 159
Open Source Initiative (OSI) 102, 106–7, 277
operating system 105n1
orality 20–1, 158, 159
Original Sin, doctrine of 177, 177n5, 181
Ortega y Gasset, José 269
OSI: see Open Source Initiative
Other 74, 188, 196, 237
Outdoor Recreation Act 116
outliers 52
pacifism 226, 227, 231
Pane, Lisa Marie 4
Papacharissi, Zizi 151–2, 165
Pariser, Eli 8, 164
Parsons, Elizabeth 258n4
Paasonen, Susanna xvi, 170, 172, 177, 181–2
passwords 55, 56
Paterson, Barbara 216, 277–8
patience 139–40, 141–2, 143, 194
patriarchy 176, 255, 258n4
Patrignani, Norberto 152, 153, 154
Paul, Christopher A. 213–14
PC (personal computer) 216
peer-to-peer networks 45, 94, 99
Perlroth, Nicole 55
perseverance (virtue) 139–40, 141–2, 143, 194
personal computer (PC) 216
personality, right to 60, 66
personhood 53, 188, 278
persuasive technologies 131
Petrov, Stanislav 224n1
Pew Research Center 5
phenomenological view 186–90
Phiri, Sam xvii
photographs 15, 16
phronēsis
Aristotle xi, 25, 31–2, 97, 201, 218, 256–7
decision-making 262
gaming 211
judgment xiv, 25, 226, 256–7
moral wisdom 265
practice of 210
relational autonomy 265
Socrates 256–7, 266
technomoral wisdom 261, 268
Phúc, Kim (“napalm girl”) 178
Piaget, Jean 253
PINs 55
piracy 90, 99, 120–2
Pirate Bay 99
Pirate Party 99–100
plagiarism 280
platform imperialism 165–6, 178
Plato 159, 214, 245
Apology, The 264
Crito 220, 226
Republic 226, 227, 262–3
pleasure 194, 221–2
plebiscitism 160, 161, 163–4, 167–8
pluralism 27, 28, 73
see also ethical pluralism
polarization of thinking 10, 26
Politiken 176
Ponte, Cristina 4–5
Porn Studies 172
pornography xvi, 179
academic studies 172
art 176, 179
autonomy 175
blocking 184–5, 190–1
communication venues 172–3
consequentialism 180
consumption 173
cosmopolitanism 20
culture 170–1, 175, 178–9
cyberbullying 9
Denmark 175
deontology 171, 173, 180, 191–2
emancipation 175, 180, 205
ethics of 170–8, 181–2
exposure to 179–80, 183, 190, 208
feminism 171, 214
producers/consumers 172, 257n4
religion 177, 181
Scandinavia 175, 177
self-commodification 131
SNSs 171–2
USA 177–8
utilitarianism 171, 173, 180, 182–6
victimization 184
violence 7, 22, 34, 179
virtue ethics 171, 191–2, 268
see also alt porn; child pornography; revenge porn
porntube 182
Porte de Choisy 174
post-digital age xiii
democracy 158, 167
ethical life 11–22
friendship 129, 138
media xiv, 14
offline time 18
slow technology 152–3
well-being 18
post-feminist era 176, 257n4
Postman, Neil xii
premises 29
Pretty Good Privacy 44–5
privacy
accessibility 75
AI 24
anonymity 3, 37
Buddhism 53, 80
California 70
China 85–6
collective 22, 59, 66, 115
Confucianism 67, 72, 115
cosmopolitanism 20
creativity 60
culture 37, 45–9
Denmark 42, 61–2
deontology 3, 80
ethical pluralism 27, 72–3, 74, 75, 246
ethical relativism 73, 75
as expectation 61
Facebook 18, 56, 88
freedom 75
global metropolis 53–6
globalization 67–8
group 19–20, 59, 63, 115
identity 22
indigenous traditions 67
individual 22, 48, 66, 67, 73, 115
information 37–8, 54–5, 75–6, 79
intimate spheres 42
law 246
newspapers 16
Norway 42, 61–2
online 42
private life 47, 59–60, 61–2, 65–71, 72–5, 80
relational self 70–1, 76
as right 3, 9, 24, 36, 47, 66, 69, 133–4, 166
SCS 71, 78
selfhood 22, 58, 68, 75–8, 114–15, 272
smartphones 16
state 47, 69, 72
ubuntu 36, 47, 48, 49, 62–3, 115
USA/EU 44, 61, 65–6, 75, 166
utilitarianism 4, 80
violated 55–6, 174
virtue ethics 267
Privacy Badger 41, 42
privacy literacy 88
“privacy not included” 87
privacy paradox 85–9
privacy protection 56, 86, 166, 246
private life (privatlivet) 42, 47, 59–60, 61–2, 65–71, 72–5, 80
produsage sites 171–2
profitability 70, 102
promises 222, 225, 227–9, 232, 241
property 83, 101, 114, 115–16
property rights
Buddhism 126–7
Confucianism 118, 126–7
copying 91
deontology 126
ethical relativism 118
exclusive/inclusive 22, 103–4, 113–14, 117, 258
Fourth Amendment 61
individual 122
ubuntu 105, 126–7
utilitarianism 126, 231
virtue ethics 126–7
see also Intellectual Property
Protestant Reformation 21, 272
public goods 101, 124, 127
public sphere 157, 165, 166, 172
Pure Land tradition 66–7
Pygmalion 192
Quantified Relationship (QR) 21
Rachels, James 75, 76
racism 83, 167, 235
radio 12, 21, 61
Ramose, Mogobe B. 277
ransomware 55
rape
Custer’s Revenge 210
gaming 205
“just meat” 207
marital 197
threats 38
violence 171, 179
war 237
see also RapeLay
rape fantasies 206, 207
RapeLay 202, 205, 206, 208, 210
Raspberry Pi 106
Raymond, Eric 102
Reading, Anna 174
reason 253–4, 256, 261–2, 263–4, 267
Recording Industry Association of America: see RIAA
“Red” products 155, 213
Reddit 38, 203
Redström, Johan 152
Reifler, Jason 166
relational autonomy
Christman 77–8
distributed responsibility 155
either/or thinking 116
feminism 230, 248, 283
phronēsis 265
self 64, 115, 213, 259
sharing 258
virtue ethics 21, 78
relational self 21, 63, 70–1, 76, 113, 115
relationships
close 62
embodiment 256
flourishing 198
human beings 272–3
identity 63, 81, 106
interdependent 257
interpersonal 254
intimate 188–9, 194
technology 173n2
virtue ethics 261
relativism 26, 28
see also ethical relativism
religion 48, 177, 181, 225–6
Republic (Plato) 226, 227, 262–3
reputation right 71
Research Gate 130
respect 189–90, 196, 197–8
responsibility
distributed 23, 136, 155, 156, 212, 213
ethics of care 254–5
identity 21
shared/individual 135–6, 212
revenge porn 15, 19
reward–punishment 252
Rheingold, Howard 159, 160, 163
RIAA (Recording Industry Association of America) 90, 100, 122
Richardson, Kathleen 193, 194, 198
Ricoeur, Paul 248
rights
absolute 231
autonomy 24
body 241
democratic norms 9
equality 180
individual 248, 252, 253
justice 254
negative/positive 229–30
privacy 3, 9, 24, 36, 47, 66, 69, 133–4, 166
utilitarianism 126, 231
violated 230–1
see also allemannsretten; human rights; property rights
Robinson, Jessica 86n7
Robo-philosophy conferences xvi
robots 138, 170, 192–3, 267
Rohner, Ronald P. 52
role-playing 199, 206
Romm, Tony xiii, 44
Roose, Kevin xiii, 18
Rosemont, Henry, Jr. 51, 257, 265, 271, 273, 274
Rosenstein, Justin 11, 145
Rouvroy, Antoinette 19–20
Ruddick, Sara 186–90, 191, 197, 255, 256
Rúdólfsdóttir, Annadís G. 176, 258n4
Rusbridger, Alan 36
Rwanda 237
Sabra, Jakob Borrits 128, 149
sacredness of life 226
Sandmann, Der (Hoffmann) 9–10, 192
Satariano, Adam 55
#sayhername 167
scandals xiii, 8, 11, 19, 130, 164–5
Scandinavia
allemannsretten 105, 248
deontology 248
gender equality 83–4, 193
ICTs 124
pornography 175, 177
Schmücker, Reinold 112n2
school shootings 200
Schwartz, Margaret 167
Schwartz, Shalom H. 229, 230
Screen Time, Apple 132
SCS (Social Credit System)
digital authoritarianism xiii, 58
privacy 71, 78
resistance 48
surveillance 8, 20, 57, 78, 166
security cameras 57
segregation laws 253
self
autonomy 59–60
body 256
commodification 131, 134–5
community 278–9
as illusion 80–1
relational autonomy 64, 115, 213, 259
technology of 21, 272
see also body-subject; relational self; selfhood
self-defense 226, 241
selfhood xvi
Buddhism 67, 246–7
communications 20–1
Confucianism 246–7
culture 82
friendship 22
identity 20–1, 60, 63
individual 21, 138, 217
privacy 22, 58, 68, 75–8, 114–15, 272
property 114, 115
relational 20–1, 63, 76–7, 115, 138, 155, 213, 217, 261
self-restraint 88
SEMNetporn 172
SEMs (sexually explicit materials)
active choice-plus 184–5
autonomy 180, 181
cyberbullying 175
diversity 179, 182
internet-connected media 171, 179
marginalized sexualities 174
mediatization 173
Netporn 172
objectification 175
producers/consumers 172, 185
virtue ethics 138
Sen, Amartya 267
September 11, 2001 attacks 57
Sesame 57
sex
complete 186–90, 197, 256
desire 170, 172, 187–8, 190, 194–5, 196–8
gaming 204–6
phenomenology 186–7
pre-marital 241
violence 204–6, 268
virtual 205–6
sexbots xii, xiv, 34
advantages 195–6
cosmopolitanism 20
deontology 193, 194
desire 194, 195, 197
deskilling 269, 270
development of 171, 192, 193, 270
embodiment 171, 192
hacked 195
Sullins 214
utilitarianism 193–4, 198
virtue ethics 138, 193, 195, 196
sexting 19, 174, 176
sexual identity 171, 172, 180, 181, 187
sexual violence 5, 7, 179, 184, 202, 203, 205, 237
sexuality
attitudes 179, 207
body 177n5, 186, 189–90, 192
children/adolescents 175
culture 207
feminism 207
gender 172, 174
identity 171, 172, 180, 181, 187
“just meat” 190
marginalized 174
patriarchy 258n4
preferences 175
sexually explicit materials: see SEMs
Shahbaz, Adrian xiii, 58, 157
sharing 108–9, 258
Shelley, Mary 10, 192
Shapira, Jill S. 256
shooting of student 157
shopping Samaritan 155, 156
Shutte, Augustine 277
Sicart, Miguel 201, 209–10
Signal 45
Sigot, Nathalie 221
Simon, Judith 23, 135–6, 155
Sinnott-Armstrong, Walter 221
Sinnreich, Aram xvii, 91
Siri 37–8
slow technology xiv, 11, 129, 152–3, 154
smart assistants 37–8
Smart Cities 19–20, 55–6
smart ID project 45–6, 48
smart mobs 18
smartphones
changing text 17–18
conflict minerals 153, 154
digital codes 14
information-sharing 45
internet access 172
photographs 15
privacy 16
tracking 18–19, 37–8
Smith, Aaron 86
Snapchat 15, 17, 60, 130, 140–1, 174
Snowden, Edward xiii, 8, 36, 38, 57
SNSs (social network sites) 130
anonymity 5
commodification 88
communication 17
death 151–2
deontology 133–6
ethical choices 135
eudaimonia 143
filter bubbles 164
friendship 128, 261
negative/positive experiences 132–3
pornography 171–2
redesigning 276
sexting 174
Terms of Service 134
Terms of Use 110–11
utilitarianism 131–2, 136–7
Vallor on 129–30, 135, 139–41
snuff films 179
Social Credit System: see SCS
social media 8, 129–30, 144–5, 166, 261
social network sites: see SNSs
Social Security numbers 54
Socrates
Crito 220
eudaimonia 264
excellence 274
judgment 263–4
paraphrased 78
phronēsis 256–7, 266
reason 261–2
virtue ethics 249, 260–1
software filtering 208
solidarity 150, 161–2, 167, 168
Solon, Olivia 11
Sophocles 265–6
Søraker, Johnny 49
soul 146, 186, 193, 227, 264, 271, 272
South Korea 202
Spencer, Michael K. xiii
Spiekermann, Sarah 153, 267
Spotify 103
Stahl, Bernd Carsten 153, 216, 220, 247–8, 249
Staksrud, Elisabeth 4–5, 176, 176n3
Stald, Gitte 175
stalking 2, 3, 5–6
Stallman, Richard 90, 102–3, 104
state
consequentialism 225
culture 64, 137n2
entitlement rights 56–7
human rights 229
ICT support 124
information gathering 69–70
justice 253
privacy 47, 69, 72
surveillance 8, 56–8, 59
trust in 86, 87
stealing 91, 92, 93, 95
Steele, Catherine 152
steganography 275
stereotyping 49, 51–2, 53, 255
streaming 40, 103, 160
Stromer-Galley, Jennifer 164
Strong, Tracy 160
subordination of women 193, 207, 255
Sui, Suli 71
suicide 2–3, 6, 131, 150, 243
Sullins, John 196, 214
Sundén, Jenny 175
Sunstein, Cass 164
Super Columbine Massacre RPG! 210
supererogatory obligations 213
surveillance
Arab Springs xii–xiii
of citizens 57
Facebook 59
SCS 8, 20, 57, 78, 166
state 8, 56–8, 59
total 20, 157
voluntary 18, 59, 174
surveillance capitalism xiii, 166
Surveillance Self-Defense 87
Svarverud, Rune 68
Sweden 100, 147–8
Syvertsen, Trine xiii, 130, 145, 149
Taiwan 108
Tamura, Takanori 67
Tang, Raymond 71
Target chain store 43
Tavani, Herman T. 29n1, 75–6, 124, 125–6, 217
Taylor, Linnet 59, 63
techno-liberation 162
technology
democracy 158–69
emancipation 269
relationships 173n2
of the self 21, 272
technomoral virtues 261, 268–9, 276
techno-utopianism 164
Telegram 45
teleological approach 248
television 12, 18, 21, 158, 159
Terms of Service (ToS) 134
Terms of Use 90, 110–11
terrorism 56–8
Thailand 45–6, 48, 53, 72, 80, 112–13
Thomson, Judith Jarvis 156
Thorn, Clarisse 175, 206
Thorseth, May 162
Thunderbird 107
Tian (natural order) 273
Time 175
Timmermans, Job 153, 216
Todd, Amanda 2–3, 4, 6, 131
Tokunaga, Robert S. 7
tolerance 234, 235, 236–7, 243–4
Tong, Rosemarie 255
Tor 44, 45
torture 179, 278
Torvalds, Linus 105
ToS (Terms of Service) 134
toxic masculinity 212
tracking 18–19, 37–8
trial by Internet 6–7, 19
Trump, Donald 8, 160
Tsan, Amie 55
tube sites 182
tumblr 2
Tunisia xii, 157, 166–7
Turkle, Sherry 270
Tutu, Desmond 278
Twitter 17, 42, 59
tyranny of the majority 159
uBlock Origin 41
ubuntu 106
community/individual 67
copying 91–2
copyright 111
FLOSS 277
harmony 48
privacy 36, 47, 48, 49, 62–3, 115
property rights 105, 126–7
relational self 113
virtues 278–9
Ubuntu distribution of Linux 106–7, 277
United Kingdom 242–3
United Nations: Universal Declaration of Human Rights 229
United States of America
consumers 70
copyright law 91, 100–1, 109, 112, 119, 165
data privacy protection 69–70
dominance 101, 123
ESRB 201–2
Facebook fined xiii
First Amendment 202
Fourth Amendment 61
health care 229–30
kidney disease 242–3
national identity card 47–8
National Security Agency xiii, 8
pornography 177–8
presidential elections 8, 160, 164
privacy 61, 65–6, 75, 94, 166
public photographs 15
Social Security numbers 54
Thailand compared 53
utilitarianism 70
Universal Declaration of Human Rights 229
universalizing maxim (Kant) 227–8, 233
Unix 105
utilitarianism 34
consequences 7, 195–6
consequentialism 93, 219–25
copyright 100–1
cost–benefit analysis 184–6, 204, 218–20
domination of 248
ethically aligned design 217
as framework 215
freedom 231
gaming studies 200–1, 204
greatest good for greatest number 230–1
identity 21
illegal downloads 94, 250
individualistic 155
limitations 221–3
nation/culture 137
pornography 171, 173, 180, 182–6
privacy 4, 80
profitability 70
property rights 126, 231
sexbots 193–4, 198
SNSs use 131–2, 136–7
US approach 70
virtue ethics 92
wartime 230–1
utils 132–3n1, 173, 184, 195–6, 221–2
Vallor, Shannon
deskilling 269–70, 271
empathy 194–5
existentialism xii
flourishing 194, 268
on SNSs 129–30, 135, 139–41
technomoral virtues 261, 268–9, 276
virtue ethics xiv, xv–xvi, 88, 246, 266
virtues 139–42, 143
van Abel, Bas 153
van der Mark, Peter 153
van der Sloot, Bart 59, 63
van der Velden, Maja xvi, 154
van Wynsberghe, Aimee 258
Verbeek, Peter-Paul 173n2
Verrier, Antonin 174
victimization 184
video cameras 45
video gaming 170
vigilantes 3
Vignoles, V. L. 53
violence
active choice-plus policy 190–1
cosmopolitanism 20
feminist ethics 34
gaming 7, 138, 170–1, 173, 199–201, 204–6
pornography 7, 22, 34, 179
racism 167
rape 171, 179
self-defense 226
sex 204–6, 268
toward women and girls 7, 167, 173
see also sexual violence
virtual assistants 270
virtual communities 163
Virtual Private Networks (VPNs) 40
virtual sex 205–6
virtue ethics xv–xvi, 34, 215, 260–71
AI / Internet of Things 217
Aristotle 124, 249, 260–1, 266
Buddhism 260–1, 266–7
Confucianism 260–1, 266–7
copyright 124–6
democracy 168
deontology 92
design of ICTs xiv, xvi, 267
emotion 255–6
empathy 194
ethical dilemmas 14
eudaimonia 139
excellence 125, 168
flourishing 24–5, 139, 143, 152
friendship 128, 138–9
gaming / everyday life 209
global 246, 260–1
Golden Rule 81–2
loving 188, 190
pornography 171, 191–2, 268
property rights 126–7
relational 21, 78
relationships 261
SEMs 138
sexbots 138, 193, 195, 196
Socrates 249, 260–1
utilitarianism 92
Vallor xiv, xv–xvi, 88, 246, 266
well-being 152–3
virtues
deskilling 270–1
empathy 143, 161–2, 168
excellence 141
flourishing 270
gaming 210–11
technomoral 261, 268–9, 276
ubuntu 278–9
Vallor 139–42, 143
Volkamer, Melanie 85, 89
VPNs (Virtual Private Networks) 40
Wahl-Jorgensen, Karin 57
Wakabayashi, Daisuke 70
Wall, John 266
Wallen, Jack 41
Wang, Tom 276
Warburton, Nigel 183, 247
Ward, Stephen J. A. 183
Warren, Karen J. 245
Warren, Lydia 2, 6
Warren, Samuel 16, 61, 75
wartime 223–4, 230–1, 237
web-browsing 40–1, 43
Webster, Andrew 200
WeChat 86
Weiser, Mark 152
well-being
AI society 25
community 25, 107, 113, 114, 126, 243–4, 248–9
and contentment 139, 142, 143, 262
harmony 263
post-digital 18
slow technology 152
virtue ethics 152–3
see also eudaimonia
Wen, Ming-Hui 202
Westlund, Andrea 78
Weston, Anthony 29n1
WhatsApp 45, 129, 147
WhatsMyIP website 40
Wheeler, Deborah 157
White, Aoife 44
Whitehouse, Diane 152, 153, 154
Whiteman, Gail 153
Whitman, James Q. 60, 61, 66, 70
Wichowski, Amber 164
Wiener, Norbert 216, 263, 267–8, 269
wifi networks 55
Wikipedia 108–9
Williams, Nancy 255
Wilson, Robert A. 256
wisdom 261, 265, 268
see also phronēsis
women
clothing laws 175
emancipation 77, 176, 181, 207, 229, 230, 258n4
emotion 253–4
freedom 60, 77
gaming 207
objectification 175
oppression of 237
reduction in harm against 184–5
sexual violence against 202, 203
subordination of 193, 207, 255
suffrage 229
violence against 7, 167, 173
Wong, Julie Carrie 178
Wong, Pak-hang 275–6, 282
Woolf, Virginia 59–60
World Economic Forum 83–4
World of Warcraft 199, 200
World Values Survey 84–5
World Wide Web 216
Wright, Paul J. 7
writing 21, 272
Wu, Muh-Cherng 202
Xiao, Bang 57
xin, Confucianism 257, 265, 273
Xu, Vicky Xiuzhong 57
Yan, Yunxiang 68
Young, Iris Marion 161
YouTube 2, 17, 42, 171–2
Yu, Peter K. 90, 113, 125
Zuboff, Shoshana xiii, 166
POLITY END USER LICENSE AGREEMENT
Go to www.politybooks.com/eula to access Polity’s ebook EULA.
CONTENTS
Cover
Front Matter
Preface
Introduction
Everyday Coding
Move Slower …
Tailoring: Targeting
Why Now?
The Anti-Black Box
Race as Technology
Beyond Techno-Determinism
Beyond Biased Bots
Notes
1 Engineered Inequity
I Tinker, Therefore I Am
Raising Robots
Automating Anti-Blackness
Engineered Inequity
Notes
2 Default Discrimination
Default Discrimination
Predicting Glitches
Systemic Racism Reloaded
Architecture and Algorithms
Notes
3 Coded Exposure
Multiply Exposed
Exposing Whiteness
Exposing Difference
Exposing Science
Exposing Privacy
Exposing Citizenship
Notes
4 Technological Benevolence
Technological Benevolence
Fixing Diversity
Racial Fixes
Fixing Health
Detecting Fixes
Notes
5 Retooling Solidarity, Reimagining Justice
Selling Empathy
Rethinking Design Thinking
Beyond Code-Switching
Audits and Other Abolitionist Tools
Reimagining Technology
Notes
Acknowledgments
Appendix
References
Index
End User License Agreement
Figures
Introduction
Figure 0.1 N-Tech Lab, Ethnicity Recognition
Chapter 1
Figure 1.1 Beauty AI
Figure 1.2 Robot Slaves
Figure 1.3 Overserved
Chapter 2
Figure 2.1 Malcolm Ten
Figure 2.2 Patented PredPol Algorithm
Chapter 3
Figure 3.1 Shirley Card
Figure 3.2 Diverse Shirley
Figure 3.3 Strip Test 7
Chapter 5
Figure 5.1 Appolition
Figure 5.2 White-Collar Crime Risk Zones
RACE AFTER TECHNOLOGY
Abolitionist Tools for the New Jim Code
Ruha Benjamin
polity
Copyright © Ruha Benjamin 2019
The right of Ruha Benjamin to be identified as Author of this Work has been asserted in
accordance with the UK Copyright, Designs and Patents Act 1988.
First published in 2019 by Polity Press
Polity Press
65 Bridge Street
Cambridge CB2 1UR, UK
Polity Press
101 Station Landing
Suite 300
Medford, MA 02155, USA
All rights reserved. Except for the quotation of short passages for the purpose of criticism and
review, no part of this publication may be reproduced, stored in a retrieval system or
transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or
otherwise, without the prior permission of the publisher.
ISBN-13: 978-1-5095-2643-7
A catalogue record for this book is available from the British Library.
Library of Congress Cataloging-in-Publication Data
Names: Benjamin, Ruha, author.
Title: Race after technology : abolitionist tools for the new Jim code / Ruha Benjamin.
Description: Medford, MA : Polity, 2019. | Includes bibliographical references and index.
Identifiers: LCCN 2018059981 (print) | LCCN 2019015243 (ebook) | ISBN 9781509526437
(Epub) | ISBN 9781509526390 (hardback) | ISBN 9781509526406 (paperback)
Subjects: LCSH: Digital divide–United States–21st century. | Information technology–Social
aspects–United States–21st century. | African Americans–Social conditions–21st century. |
Whites–United States–Social conditions–21st century. | United States–Race relations–21st
century. | BISAC: SOCIAL SCIENCE / Demography.
Classification: LCC HN90.I56 (ebook) | LCC HN90.I56 B46 2019 (print) | DDC
303.48/330973–dc23
LC record available at https://lccn.loc.gov/2018059981
The publisher has used its best endeavours to ensure that the URLs for external websites
referred to in this book are correct and active at the time of going to press. However, the
publisher has no responsibility for the websites and can make no guarantee that a site will
remain live or that the content is or will remain appropriate.
Every effort has been made to trace all copyright holders, but if any have been overlooked the
publisher will be pleased to include any necessary credits in any subsequent reprint or edition.
For further information on Polity, visit our website: politybooks.com
Dedication
All my life I’ve prided myself on being a survivor.
But surviving is just another loop …
Maeve Millay, Westworld1
I should constantly remind myself that the real leap
consists in introducing invention into existence …
In the world through which I travel,
I am endlessly creating myself …
I, the [hu]man of color, want only this:
That the tool never possess the [hu]man.
Black Skin, White Masks, Frantz Fanon2
Notes
1. Toye 2016.
2. Fanon 2008, p. 179.
Preface
I spent part of my childhood living with my grandma just off Crenshaw
Boulevard in Los Angeles. My school was on the same street as our
house, but I still spent many a day trying to coax kids on my block to
“play school” with me on my grandma’s huge concrete porch covered
with that faux-grass carpet. For the few who would come, I would
hand out little slips of paper and write math problems on a small
chalkboard until someone would insist that we go play tag or hide-
and-seek instead. Needless to say, I didn’t have that many friends! But
I still have fond memories of growing up off Crenshaw surrounded by
people who took a genuine interest in one another’s well-being and
who, to this day, I can feel cheering me on as I continue to play school.
Some of my most vivid memories of growing up also involve the police.
Looking out of the backseat window of the car as we passed the
playground fence, boys lined up for police pat-downs; or hearing the
nonstop rumble of police helicopters overhead, so close that the roof
would shake while we all tried to ignore it. Business as usual. Later, as
a young mom, anytime I went back to visit I would recall the
frustration of trying to keep the kids asleep with the sound and light
from the helicopter piercing the window’s thin pane. Like everyone
who lives in a heavily policed neighborhood, I grew up with a keen
sense of being watched. Family, friends, and neighbors – all of us
caught up in a carceral web, in which other people’s safety and
freedom are predicated on our containment.
Now, in the age of big data, many of us continue to be monitored and
measured, but without the audible rumble of helicopters to which we
can point. This doesn’t mean we no longer feel what it’s like to be a
problem. We do. This book is my attempt to shine light in the other
direction, to decode this subtle but no less hostile form of systemic
bias, the New Jim Code.
Introduction
The New Jim Code
Naming a child is serious business. And if you are not White in the
United States, there is much more to it than personal preference.
When my younger son was born I wanted to give him an Arabic name
to reflect part of our family heritage. But it was not long after 9/11, so
of course I hesitated. I already knew he would be profiled as a Black
youth and adult, so, like most Black mothers, I had already started
mentally sparring those who would try to harm my child, even before
he was born. Did I really want to add another round to the fight? Well,
the fact is, I am also very stubborn. If you tell me I should not do
something, I take that as a dare. So I gave the child an Arabic first and
middle name and noted on his birth announcement: “This guarantees
he will be flagged anytime he tries to fly.”
If you think I am being hyperbolic, keep in mind that names are
racially coded. While they are one of the everyday tools we use to
express individuality and connections, they are also markers
interacting with numerous technologies, like airport screening systems
and police risk assessments, as forms of data. Depending on one’s
name, one is more likely to be detained by state actors in the name of
“public safety.”
Just as in naming a child, there are many everyday contexts – such as
applying for jobs, or shopping – that employ emerging technologies,
often to the detriment of those who are racially marked. This book
explores how such technologies, which often pose as objective,
scientific, or progressive, too often reinforce racism and other forms of
inequity. Together, we will work to decode the powerful assumptions
and values embedded in the material and digital architecture of our
world. And we will be stubborn in our pursuit of a more just and
equitable approach to tech – ignoring the voice in our head that says,
“No way!” “Impossible!” “Not realistic!” But as activist and educator
Mariame Kaba contends, “hope is a discipline.”1 Reality is something
we create together, except that so few people have a genuine say in the
world in which they are forced to live. Amid so much suffering and
injustice, we cannot resign ourselves to this reality we have inherited.
It is time to reimagine what is possible. So let’s get to work.
Everyday Coding
Each year I teach an undergraduate course on race and racism and I
typically begin the class with an exercise designed to help me get to
know the students while introducing the themes we will wrestle with
during the semester. What’s in a name? Your family story, your
religion, your nationality, your gender identity, your race and
ethnicity? What assumptions do you think people make about you on
the basis of your name? What about your nicknames – are they chosen
or imposed? From intimate patterns in dating and romance to large-
scale employment trends, our names can open and shut doors. Like a
welcome sign inviting people in or a scary mask repelling and pushing
them away, this thing that is most ours is also out of our hands.
The popular book and Netflix documentary Freakonomics describe
the process of parents naming their kids as an exercise in branding,
positioning children as more or less valuable in a competitive social
marketplace. If we are the product, our names are the billboard – a
symptom of a larger neoliberal rationale that subsumes all other
sociopolitical priorities to “economic growth, competitive positioning,
and capital enhancement.”2 My students invariably chuckle when the
“baby-naming expert” comes on the screen to help parents “launch”
their newest offspring. But the fact remains that naming is serious
business. The stakes are high not only because parents’ decisions will
follow their children for a lifetime, but also because names reflect
much longer histories of conflict and assimilation and signal fierce
political struggles – as when US immigrants from Eastern Europe
anglicize their names, or African Americans at the height of the Black
Power movement took Arabic or African names to oppose White
supremacy.
I will admit, something that irks me about conversations regarding
naming trends is how distinctly African American names are set apart
as comically “made up” – a pattern continued in Freakonomics. This
tendency, as I point out to students, is a symptom of the chronic anti-
Blackness that pervades even attempts to “celebrate difference.”
Blackness is routinely conflated with cultural deficiency, poverty, and
pathology … Oh, those poor Black mothers, look at how they misspell
“Uneeq.” Not only does this reek of classism, but it also harbors a
willful disregard for the fact that everyone’s names were at one point
made up!3
Usually, many of my White students assume that the naming exercise
is not about them. “I just have a normal name,” “I was named after my
granddad,” “I don’t have an interesting story, prof.” But the presumed
blandness of White American culture is a crucial part of our national
narrative. Scholars describe the power of this plainness as the invisible
“center” against which everything else is compared and as the “norm”
against which everyone else is measured. Upon further reflection,
what appears to be an absence in terms of being “cultureless” works
more like a superpower. Invisibility, with regard to Whiteness, offers
immunity. To be unmarked by race allows you to reap the benefits but
escape responsibility for your role in an unjust system. Just check out
the hashtag #CrimingWhileWhite to read the stories of people who are
clearly aware that their Whiteness works for them like an armor and a
force field when dealing with the police. A “normal” name is just one of
many tools that reinforce racial invisibility.
As a class, then, we begin to understand that all those things dubbed
“just ordinary” are also cultural, as they embody values, beliefs, and
narratives, and normal names offer some of the most powerful stories
of all. If names are social codes that we use to make everyday
assessments about people, they are not neutral but racialized,
gendered, and classed in predictable ways. Whether in the time of
Moses, Malcolm X, or Missy Elliot, names have never grown on trees.
They are concocted in cultural laboratories and encoded and infused
with meaning and experience – particular histories, longings, and
anxieties. And some people, by virtue of their social position, are given
more license to experiment with unique names. Basically, status
confers cultural value that engenders status, in an ongoing cycle of
social reproduction.4
In a classic study of how names impact people’s experience on the job
market, researchers show that, all other things being equal, job seekers
with White-sounding first names received 50 percent more callbacks
from employers than job seekers with Black-sounding names.5 They
calculated that the racial gap was equivalent to eight years of relevant
work experience, which White applicants did not actually have; and
the gap persisted across occupations, industry, employer size – even
when employers included the “equal opportunity” clause in their ads.6
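To make the audit's arithmetic concrete, here is a minimal sketch in Python. The counts are invented purely to reproduce the reported 50 percent gap; they stand in for the study's actual data, which is not shown here.

```python
# Hypothetical callback counts from a matched-resume audit: identical
# resumes, differing only in the applicant's name. Numbers are invented
# to mirror the reported 50 percent disparity, not taken from the study.
results = {
    "white_sounding": (150, 1000),   # (callbacks, applications sent)
    "black_sounding": (100, 1000),
}

rates = {group: callbacks / sent for group, (callbacks, sent) in results.items()}
gap = rates["white_sounding"] / rates["black_sounding"] - 1

print(rates)  # {'white_sounding': 0.15, 'black_sounding': 0.1}
print(f"{gap:.0%} more callbacks for White-sounding names")  # 50%
```

Because the resumes are otherwise identical, any difference in callback rates can be attributed to the name alone, which is what gives audit studies their force.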
With emerging technologies we might assume that racial bias will be
more scientifically rooted out. Yet, rather than challenging or
overcoming the cycles of inequity, technical fixes too often reinforce
and even deepen the status quo. For example, a study by a team of
computer scientists at Princeton examined whether a popular
algorithm, trained on human writing online, would exhibit the same
biased tendencies that psychologists have documented among
humans. They found that the algorithm associated White-sounding
names with “pleasant” words and Black-sounding names with
“unpleasant” ones.7
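The published test works by comparing distances in a word-embedding space. The toy sketch below strips the idea down to a single name-versus-attribute comparison; the vectors are made up for illustration, whereas real audits use pretrained embeddings (such as GloVe) and average over whole sets of names and attribute words.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: how close two word vectors point in embedding space.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up 3-d "embeddings"; real tests use pretrained, high-dimensional vectors.
emb = {
    "emily":      np.array([0.9, 0.1, 0.2]),
    "lakisha":    np.array([0.1, 0.9, 0.2]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

def association(name):
    # Positive score: the name sits closer to "pleasant" than to "unpleasant".
    return cosine(emb[name], emb["pleasant"]) - cosine(emb[name], emb["unpleasant"])

for name in ("emily", "lakisha"):
    print(name, round(association(name), 3))
```

Nothing in the code mentions race; the skew comes entirely from where the training text placed the words, which is the point of the finding.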
Such findings demonstrate what I call “the New Jim Code”: the
employment of new technologies that reflect and reproduce existing
inequities but that are promoted and perceived as more objective or
progressive than the discriminatory systems of a previous era.8 Like
other kinds of codes that we think of as neutral, “normal” names have
power by virtue of their perceived neutrality. They trigger stories about
what kind of person is behind the name – their personality and
potential, where they come from but also where they should go.
Codes are both reflective and predictive. They have a past and a future.
“Alice Tang” comes from a family that values education and is
expected to do well in math and science. “Tyrone Jackson” hails from a
neighborhood where survival trumps scholastics; and he is expected to
excel in sports. More than stereotypes, codes act as narratives, telling
us what to expect. As data scientist and Weapons of Math Destruction
author Cathy O’Neil observes, “[r]acism is the most slovenly of
predictive models. It is powered by haphazard data gathering and
spurious correlations, reinforced by institutional inequities, and
polluted by confirmation bias.”9
Racial codes are born from the goal of, and facilitate, social control.
For instance, in a recent audit of California’s gang database, not only
do Blacks and Latinxs constitute 87 percent of those listed, but many
of the names turned out to be babies under the age of 1, some of whom
were supposedly “self-described gang members.” So far, no one
ventures to explain how this could have happened, except by saying
that some combination of zip codes and racially coded names
constitutes a risk.10 Once someone is added to the database, whether
they know they are listed or not, they undergo even more surveillance
and lose a number of rights.11
Most important, then, is the fact that, once something or someone is
coded, this can be hard to change. Think of all of the time and effort it
takes for a person to change her name legally. Or, going back to
California’s gang database: “Although federal regulations require that
people be removed from the database after five years, some records
were not scheduled to be removed for more than 100 years.”12 Yet
rigidity can also give rise to ingenuity. Think of the proliferation of
nicknames, an informal mechanism that allows us to work around
legal systems that try to fix us in place. We do not have to embrace the
status quo, even though we must still deal with the sometimes
dangerous consequences of being illegible, as when a transgender
person is “deadnamed” – called their birth name rather than chosen
name. Codes, in short, operate within powerful systems of meaning
that render some things visible, others invisible, and create a vast
array of distortions and dangers.
I share this exercise of how my students and I wrestle with the cultural
politics of naming because names are an expressive tool that helps us
think about the social and political dimensions of all sorts of
technologies explored in this book. From everyday apps to complex
algorithms, Race after Technology aims to cut through industry hype
to offer a field guide into the world of biased bots, altruistic
algorithms, and their many coded cousins. Far from coming upon a
sinister story of racist programmers scheming in the dark corners of
the web, we will find that the desire for objectivity, efficiency,
profitability, and progress fuels the pursuit of technical fixes across
many different social arenas. Oh, if only there were a way to slay
centuries of racial demons with a social justice bot! But, as we will
see, the road to inequity is paved with technical fixes.
Along the way, this book introduces conceptual tools to help us decode
the promises of tech with historically and sociologically informed
skepticism. I argue that tech fixes often hide, speed up, and even
deepen discrimination, while appearing to be neutral or benevolent
when compared to the racism of a previous era. This set of practices
that I call the New Jim Code encompasses a range of discriminatory
designs – some that explicitly work to amplify hierarchies, many that
ignore and thus replicate social divisions, and a number that aim to fix
racial bias but end up doing the opposite.
Importantly, the attempt to shroud racist systems under the cloak of
objectivity has been made before. In The Condemnation of Blackness,
historian Khalil Muhammad (2011) reveals how an earlier “racial data
revolution” in the nineteenth century marshalled science and statistics
to make a “disinterested” case for White superiority:
Racial knowledge that had been dominated by anecdotal,
hereditarian, and pseudo-biological theories of race would
gradually be transformed by new social scientific theories of race
and society and new tools of analysis, namely racial statistics and
social surveys. Out of the new methods and data sources, black
criminality would emerge, alongside disease and intelligence, as a
fundamental measure of black inferiority.13
You might be tempted to see the datafication of injustice in that era as
having been much worse than in the present, but I suggest we hold off
on easy distinctions because, as we shall see, the language of
“progress” is too easily weaponized against those who suffer most
under oppressive systems, however sanitized.
Readers are also likely to note how the term New Jim Code draws on
The New Jim Crow, Michelle Alexander’s (2012) book that makes a
case for how the US carceral system has produced a “new racial caste
system” by locking people into a stigmatized group through a
colorblind ideology, a way of labeling people as “criminals” that
permits legalized discrimination against them. To talk of the new Jim
Crow begs the question: What of the old? “Jim Crow” was first
introduced as the title character of an 1832 minstrel show that mocked
and denigrated Black people. White people used it not only as a
derogatory epithet but also as a way to mark space, “legal and social
devices intended to separate, isolate, and subordinate Blacks.”14 And,
while it started as a folk concept, it was taken up as an academic
shorthand for legalized racial segregation, oppression, and injustice in
the US South between the 1890s and the 1950s. It has proven to be an
elastic term, used to describe an era, a geographic region, laws,
institutions, customs, and a code of behavior that upholds White
supremacy.15 Alexander compares the old with the new Jim Crow in a
number of ways, but most relevant for this discussion is her emphasis
on a shift from explicit racialization to a colorblind ideology that
masks the destruction wrought by the carceral system, severely
limiting the life chances of those labeled criminals who, by design, are
overwhelmingly Black. “Criminal,” in this era, is code for Black, but
also for poor, immigrant, second-class, disposable, unwanted, detritus.
What happens when this kind of cultural coding gets embedded into
the technical coding of software programs? In a now classic study,
computer scientist Latanya Sweeney examined how online search
results associated Black names with arrest records at a much higher
rate than White names, a phenomenon she first noticed when
Google-searching her own name and finding results that suggested she
had a criminal record.16 The lesson? “Google’s algorithms were optimizing
for the racially discriminating patterns of past users who had clicked
on these ads, learning the racist preferences of some users and feeding
them back to everyone else.”17 In a technical sense, the writer James
Baldwin’s insight is prescient: “The great force of history comes from
the fact that we carry it within us, are unconsciously controlled by it in
many ways, and history is literally present in all that we do.”18 And
when these technical codes move beyond the bounds of the carceral
system, beyond labeling people as “high” and “low” risk criminals,
when automated systems in employment, education, healthcare,
and housing come to make decisions about people’s deservedness for
all kinds of opportunities, then tech designers are erecting a digital
caste system, one structured by existing racial inequities, not merely
colorblind in the way Alexander warns. These tech advances are sold as
morally superior because they purport to rise above human bias, even
though they could not exist without data produced through histories of
exclusion and discrimination.
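The mechanics of the feedback loop that Sweeney uncovered can be
sketched in a few lines of code. The following toy simulation (in
Python, with invented ad copy, click rates, and an epsilon-greedy
optimizer; a hypothetical illustration, not Google’s actual system)
shows how an ad server that merely optimizes for clicks will learn a
biased association from a subset of users and serve it back to
everyone else:

import random

# Running impression and click counts for two (invented) versions of ad copy.
shows = {"arrest_record": 1, "neutral": 1}   # start at 1 to avoid division by zero
clicks = {"arrest_record": 0, "neutral": 0}

def pick_ad(eps=0.1):
    # Epsilon-greedy "optimization": mostly show whichever copy has the
    # higher historical click rate, occasionally explore the other.
    if random.random() < eps:
        return random.choice(list(shows))
    return max(shows, key=lambda a: clicks[a] / shows[a])

def user_clicks(ad):
    # Suppose a subset of users clicks the "arrest record" copy slightly
    # more often; the optimizer itself needs no racist intent of its own.
    rate = 0.05 + (0.02 if ad == "arrest_record" else 0.0)
    return random.random() < rate

for _ in range(100_000):
    ad = pick_ad()
    shows[ad] += 1
    if user_clicks(ad):
        clicks[ad] += 1

# The biased copy wins and becomes the default shown to everyone.
print("share shown:", {a: round(shows[a] / sum(shows.values()), 2) for a in shows})

Run repeatedly, the “arrest record” copy dominates nearly every time:
the clicks of a biased few are fed back to all users as the default
association.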
In fact, as this book shows, colorblindness is no longer even a
prerequisite for the New Jim Code. In some cases, technology “sees”
racial difference, and this range of vision can involve seemingly
positive affirmations or celebrations of presumed cultural differences.
And yet we are told that how tech sees “difference” is a more objective
reflection of reality than if a mere human produced the same results.
Even with the plethora of visibly diverse imagery engendered and
circulated through technical advances, particularly social media, bias
enters through the backdoor of design optimization in which the
humans who create the algorithms are hidden from view.
Move Slower …
Problem solving is at the heart of tech. An algorithm, after all, is a set
of instructions, rules, and calculations designed to solve problems.
Data for Black Lives co-founder Yeshimabeit Milner reminds us that
“[t]he decision to make every Black life count as three-fifths of a
person was embedded in the electoral college, an algorithm that
continues to be the basis of our current democracy.”19 Thus, even just
deciding what problem needs solving requires a host of judgments;
and yet we are expected to pay no attention to the man behind the
screen.20
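To make that point concrete, consider a deliberately trivial sketch
(in Python; the neighborhoods and numbers are invented) in which the
same data yields two different “optimal” answers depending on which
problem the designer decides to solve:

# Each (invented) neighborhood: name, expected revenue, share of residents unserved.
neighborhoods = [
    ("A", 9, 0.05),
    ("B", 6, 0.40),
    ("C", 4, 0.70),
]

# Framing 1: the "problem" is profitability.
most_profitable = max(neighborhoods, key=lambda n: n[1])

# Framing 2: the "problem" is unmet need.
most_underserved = max(neighborhoods, key=lambda n: n[2])

print("expand service in:", most_profitable[0])   # A, under the profit framing
print("expand service in:", most_underserved[0])  # C, under the equity framing

The code in each framing is equally “correct”; the judgment lies in
the objective chosen before a single line is written.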
As danah boyd and M. C. Elish of the Data & Society Research
Institute posit, “[t]he datasets and models used in these systems are
not objective representations of reality. They are the culmination of
particular tools, people, and power structures that foreground one way
of seeing or judging over another.”21 By pulling back the curtain and
drawing attention to forms of coded inequity, not only do we become
more aware of the social dimensions of technology but we can work
together against the emergence of a digital caste system that relies on
our naivety when it comes to the neutrality of technology. This
problem extends beyond obvious forms of criminalization and
surveillance.22 It includes an elaborate social and technical apparatus
that governs all areas of life.
The animating force of the New Jim Code is that tech designers encode
judgments into technical systems but claim that the racist results of
their designs are entirely exterior to the encoding process. Racism thus
becomes doubled – magnified and buried under layers of digital
denial. Some bad actors in this arena are easier to spot than
others. Facebook executives who denied and lied about their
knowledge of Russia’s interference in the 2016 presidential election via
social media are perpetrators of the most broadcast violation of public
trust to date.23 But the line between bad and “neutral” players is a
fuzzy one and there are many tech insiders hiding behind the language
of free speech, allowing racist and sexist harassment to run rampant in
the digital public square and looking the other way as avowedly bad
actors deliberately crash into others with reckless abandon.
For this reason, we should consider how private industry choices are
in fact public policy decisions. They are animated by political values
influenced strongly by libertarianism, which extols individual
autonomy and corporate freedom from government regulation.
However, a recent survey of the political views of 600 tech
entrepreneurs found that a majority of them favor higher taxes on the
rich, social benefits for the poor, single-payer healthcare,
environmental regulations, parental leave, immigration protections,
and other issues that align with Democratic causes. Yet most of them
also staunchly oppose labor unions and government regulation.24 As
one observer put it, “Silicon Valley entrepreneurs don’t mind the
government regulating other industries, but they prefer Washington to
stay out of their own business.”25 For example, while many say they
support single-payer healthcare in theory, they are also reluctant to
contribute to tax revenue that would fund such an undertaking. So
“political values” here is less about party affiliation or what people
believe in the abstract and more about how the decisions of tech
entrepreneurs impact questions of power, ethics, equity, and sociality.
In that light, I think the dominant ethos in this arena is best expressed
by Facebook’s original motto: “Move Fast and Break Things.” To
which we should ask: What about the people and places broken in the
process? Residents of Silicon Valley displaced by the spike in housing
costs, or Amazon warehouse workers compelled to skip bathroom
breaks and pee in bottles.26 “Move Fast, Break People, and Call It
Progress”?
“Data sharing,” for instance, sounds like a positive development,
streamlining the bulky bureaucracies of government so the public can
access goods and services faster. But access goes both ways. If
someone is marked “risky” in one arena, that stigma follows them
around much more efficiently, streamlining marginalization. A leading
Europe-based advocate for workers’ data rights described how she was
denied a bank loan despite having a high income and no debt, because
the lender had access to her health file, which showed that she had a
tumor.27 In the United States, data fusion centers are one of the most
pernicious sites of the New Jim Code, coordinating “data-sharing
among state and local police, intelligence agencies, and private
companies”28 and deepening what the Stop LAPD Spying Coalition calls
the stalker state. Like other techy euphemisms, “fusion” recalls those
trendy restaurants where food looks like art. But the clientele of such
upscale eateries is rarely the target of data fusion centers that terrorize
the residents of many cities.
If private companies are creating public policies by other means, then
I think we should stop calling ourselves “users.” Users get used. We
are more like unwitting constituents who, by clicking submit, have
authorized tech giants to represent our interests. But there are
promising signs that the tide is turning.
According to a recent survey, a growing segment of the public (55
percent, up from 45 percent) wants more regulation of the tech
industry, saying that it does more to hurt democracy and free speech
than help.29 And company executives are admitting more
responsibility for safeguarding against hate speech and harassment on
their platforms. For example, Facebook hired thousands more people
on its safety and security team and is investing in automated tools to
spot toxic content. Following Russia’s disinformation campaign using
Facebook ads, the company is now “proactively finding and
suspending coordinated networks of accounts and pages aiming to
spread propaganda, and telling the world about it when it does. The
company has enlisted fact-checkers to help prevent fake news from
spreading as broadly as it once did.”30
In November 2018, Zuckerberg held a press call to announce the
formation of a “new independent body” that users could turn to if they
wanted to appeal a decision made to take down their content. But
many observers criticize these attempts to address public concerns as
not fully reckoning with the political dimensions of the company’s
private decisions. Reporter Kevin Roose summarizes this governance
behind closed doors:
Shorter version of this call: Facebook is starting a judicial branch
to handle the overflow for its executive branch, which is also its
legislative branch, also the whole thing is a monarchy.31
The co-director of the AI Now Research Institute, Kate Crawford,
probes further:
Will Facebook’s new Supreme Court just be in the US? Or one for
every country where they operate? Which norms and laws rule?
Do execs get to overrule the decisions? Finally, why stop at user
content? Why not independent oversight of the whole system?32
The “ruthless code of secrecy” that enshrouds Silicon Valley is one of
the major factors fueling public distrust.33 So, too, is the rabid appetite
of big tech to consume all in its path, digital and physical real estate
alike. “There is so much of life that remains undisrupted,” as one
longtime tech consultant to companies including Apple, IBM, and
Microsoft put it. “For all intents and purposes, we’re only 35 years into
a 75- or 80-year process of moving from analog to digital. The image of
Silicon Valley as Nirvana has certainly taken a hit, but the reality is
that we the consumers are constantly voting for them.”34 The fact is,
the stakes are too high, the harms too widespread, the incentives too
enticing, for the public to accept the tech industry’s attempts at self-
regulation.
It is revealing, in my view, that many tech insiders choose a more
judicious approach to tech when it comes to raising their own kids.35
There are reports of Silicon Valley parents requiring nannies to sign
“no-phone contracts”36 and opting to send their children to schools in
which devices are banned or introduced slowly, in favor of “pencils,
paper, blackboards, and craft materials.”37 Move Slower and Protect
People? All the while, I attend education conferences around the
country where vendors fill massive expo halls to sell educators the
latest products, couched in a concern that all students deserve access –
yet the most privileged refuse it? Those afforded the luxury of opting
out are concerned with tech addiction – “On the scale between candy
and crack cocaine, it’s closer to crack cocaine,” one CEO said of
screens.38 Many are also wary about the lack of data privacy, because
access goes both ways with apps and websites that track users’
information.
In fact Donald Knuth, the author of The Art of Computer Programming,
the field’s bible (some call him “the Yoda of Silicon Valley”),
recently commented that he feels “algorithms are getting too
prominent in the world. It started out that computer scientists were
worried nobody was listening to us. Now I’m worried that too many
people are listening.”39 To the extent that social elites are able to
exercise more control in this arena (at least for now), they also
position themselves as digital elites within a hierarchy that allows
some modicum of informed refusal at the very top. For the rest of us,
nanny contracts and Waldorf tuition are not an option, which is why
the notion of a personal right to refuse privately is not a tenable
solution.40
The New Jim Code will not be thwarted by simply revising user
agreements, as most companies attempted to do in the days following
Zuckerberg’s 2018 congressional testimony. And more and more
young people seem to know that, as when Brooklyn students staged a
walkout to protest a Facebook-designed online program, saying that
“it forces them to stare at computers for hours and ‘teach ourselves,’”
guaranteeing only 10–15 minutes of “mentoring” each week!41 In fact
these students have a lot to teach us about refusing tech fixes for
complex social problems that come packaged in catchphrases like
“personalized learning.”42 They are sick and tired of being atomized
and quantified, of having their personal uniqueness sold to them, one
“tailored” experience after another. They’re not buying it. Coded
inequity, in short, can be met with collective defiance, with resisting
the allure of (depersonalized) personalization and asserting, in this
case, the sociality of learning. This kind of defiance calls into question
a libertarian ethos that assumes what we all really want is to be left
alone, screen in hand, staring at reflections of ourselves. Social
theorist Karl Marx might call tech personalization our era’s opium of
the masses and encourage us to “just say no,” though he might also
point out that not everyone is in an equal position to refuse, owing to
existing forms of stratification. Move slower and empower people.
Tailoring: Targeting
In examining how different forms of coded inequity take shape, this
text presents a case for understanding race itself as a kind of tool – one
designed to stratify and sanctify social injustice as part of the
architecture of everyday life. In this way, this book challenges us to
question not only the technologies we are sold, but also the ones we
manufacture ourselves. For most of US history, White Americans have
used race as a tool to denigrate, endanger, and exploit non-White
people – openly, explicitly, and without shying away from the deadly
demarcations that racial imagination brings to life. And, while overt
White supremacy is proudly reasserting itself with the election of
Donald Trump in 2016, much of this is newly cloaked in the language
of White victimization and false equivalency. What about a White
history month? White studies programs? White student unions? No
longer content with the power of invisibility, a vocal subset of the
population wants to be recognized and celebrated as White – a
backlash against the civil rights gains of the mid-twentieth century, the
election of the country’s first Black president, diverse representations
in popular culture, and, more fundamentally, a refusal to comprehend
that, as Baldwin put it, “white is a metaphor for power,” unlike any
other color in the rainbow.43
The dominant shift toward multiculturalism has been marked by a
move away from one-size-fits-all mass marketing toward ethnically
tailored niches that capitalize on calls for diversity. For example, the
Netflix movie recommendations that pop up on your screen can entice
Black viewers with tailored movie posters featuring Black supporting
cast members, getting them to click on an option they might otherwise
pass on.44 Why bother with broader structural changes in casting and
media representation, when marketing gurus can make Black actors
appear more visible than they really are in the actual film? It may be
that the hashtag #OscarsSoWhite drew attention to the overwhelming
Whiteness of the Academy Awards, but, as algorithms become ever
more tailored, the public will be given the illusion of progress.45
Importantly, Netflix and other platforms that thrive on tailored
marketing do not need to ask viewers about their race, because they
use prior viewing and search histories as proxies that help them
predict who will be attracted to differently cast movie posters.
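A minimal sketch of how such proxy-based tailoring might work (in
Python; the titles, weights, and threshold are all invented, and this
is not Netflix’s actual system) shows how a platform can act on race
without ever recording it:

# Weights learned from past engagement: how strongly each (invented) title
# predicts clicks on posters foregrounding Black supporting cast members.
proxy_weights = {"Title X": 0.8, "Title Y": 0.1, "Title Z": 0.6}

def choose_poster(viewing_history, threshold=1.0):
    # Race is never an input; prior viewing does the same work as a proxy.
    score = sum(proxy_weights.get(title, 0.0) for title in viewing_history)
    return "black_supporting_cast" if score > threshold else "default"

print(choose_poster(["Title X", "Title Z"]))  # black_supporting_cast (score 1.4)
print(choose_poster(["Title Y"]))             # default (score 0.1)

The database stores no racial category, only a score that does the
same work.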
Economic recognition is a ready but inadequate proxy for political
representation and social power. This transactional model of
citizenship presumes that people’s primary value hinges on the ability
to spend money and, in the digital age, expend attention … browsing,
clicking, buying. This helps explain why attempts to opt out
of tech-mediated life can themselves become criminalized, as they threaten the
digital order of things. Analog is antisocial, with emphasis on anti …
“what are you trying to hide?”
Meanwhile, multiculturalism’s proponents are usually not interested
in facing White supremacy head on. Sure, movies like Crazy Rich
Asians and TV shows like Black-ish, Fresh off the Boat, and The
Goldbergs do more than target their particular demographics; at
times, they offer incisive commentary on the racial–ethnic dynamics
of everyday life, drawing viewers of all backgrounds into their stories.
Then there is the steady stream of hits coming out of Shondaland that
deliberately buck the Hollywood penchant for typecasting. In response
to questions about her approach to shows like Grey’s Anatomy and
Scandal, Shonda Rhimes says she is not trying to diversify television
but to normalize it: “Women, people of color, LGBTQ people equal
WAY more than 50 percent of the population. Which means it ain’t out
of the ordinary. I am making the world of television look NORMAL.”46
But, whether TV or tech, cosmetic diversity too easily stands in for
substantive change, with a focus on feel-good differences like food,
language, and dress, not on systemic disadvantages associated with
employment, education, and policing. Celebrating diversity, in this
way, usually avoids sober truth-telling so as not to ruin the party. Who
needs to bother with race or sex disparities in the workplace, when
companies can capitalize on stereotypical differences between groups?
The company BIC came out with a line of “BICs For Her” pens that
were not only pink, small, and bejeweled, but priced higher than the
non-gendered ones. Criticism was swift. Even Business Insider, not
exactly known as a feminist news outlet, chimed in: “Finally, there’s a
lady’s pen that makes it possible for the gentler sex to write on pink,
scented paper: Bic for Her. Remember to dot your i’s with hearts or
smiley faces, girls!” Online reviewers were equally fierce and funny:
Finally! For years I’ve had to rely on pencils, or at worst, a twig
and some drops of my feminine blood to write down recipes (the
only thing a lady should be writing ever) … I had despaired of ever
being able to write down said recipes in a permanent manner,
though my men-folk assured me that I “shouldn’t worry yer pretty
little head.” But, AT LAST! Bic, the great liberator, has released a
womanly pen that my gentle baby hands can use without fear of
unlady-like callouses and bruises. Thank you, Bic!47
No, thank you, anonymous reviewers! But last I checked, ladies’
pens are still available for purchase at a friendly online retailer near
you, though packaging now includes a nod to “breast cancer
awareness,” or what is called pinkwashing – the co-optation of breast
cancer to sell products or provide cover for questionable political
campaigns.48
Critics launched a similar online campaign against an IBM initiative
called Hack a Hair Dryer. In its efforts to encourage girls to enter
STEM professions, the company relied on tired stereotypes of girls and
women as uniquely preoccupied with appearance and grooming:
Sorry @IBM i’m too busy working on lipstick chemistry and
writing down formula with little hearts over the i s to
#HackAHairDryer49
Niche marketing, in other words, has a serious downside when
tailoring morphs into targeting and stereotypical containment. Despite
decades of scholarship on the social fabrication of group identity, tech
developers, like their marketing counterparts, are encoding race,
ethnicity, and gender as immutable characteristics that can be
measured, bought, and sold. Vows of colorblindness are not necessary
to shield coded inequity if we believe that scientifically calculated
differences are somehow superior to crude human bias.
Consider this ad for ethnicity recognition software developed by a
Russian company, NTech Lab – which beats Google’s FaceNet as the
world’s best system for face recognition, with 73.3 percent accuracy on 1
million faces (Figure 0.1).50 NTech explains that its algorithm has
“practical applications in retail, healthcare, entertainment and other
industries by delivering accurate and timely demographic data to
enhance the quality of service”; this includes targeted marketing
campaigns and more.51
What NTech does not mention is that this technology is especially
useful to law enforcement and immigration officials and can even be
used at mass sporting and cultural events to monitor streaming video
feed.52 This shows how multicultural representation, marketed as an
individualistic and fun experience, can quickly turn into criminalizing
misrepresentation. While the tools of some companies, such as NTech,
are already being adopted for purposes of policing, other companies,
for example Diversity, Inc., which I will introduce in the next chapter, are squarely
in the ethnic marketing business, and some are even developing
techniques to try to bypass human bias. What accounts for this
proliferation of racial codification?
Figure 0.1 N-Tech Lab, Ethnicity Recognition
Source: Twitter @mor10, May 12, 2018, 5:46 p.m.
Why Now?
Today the glaring gap between egalitarian principles and inequitable
practices is filled with subtler forms of discrimination that give the
illusion of progress and neutrality, even as coded inequity makes it
easier and faster to produce racist outcomes. Notice that I said
outcomes and not beliefs, because it is important for us to assess how
technology can reinforce bias by what it does, regardless of marketing
or intention. But first we should acknowledge that intentional and
targeted forms of White supremacy abound!
As sociologist Jessie Daniels documents, White nationalists have
ridden the digital wave with great success. They are especially fond of
Twitter and use it to spread their message, grow their network,
disguise themselves online, and generate harassment campaigns that
target people of color, especially Black women.53 Not only does the
design of such platforms enable the “gamification of hate” by placing
the burden on individual users to report harassers; Twitter’s relatively
hands-off approach when it comes to the often violent and hate-filled
content of White supremacists actually benefits the company’s bottom
line.
This is a business model in which more traffic equals more profit, even
if that traffic involves violently crashing into other users – as when
Ghostbusters star Leslie Jones received constant threats of rape and
lynching after noted White supremacist Milo Yiannopoulos rallied a
digital mob against her: a high-profile example of the macro-
aggressions that many Black women experience on social media every
day.54 In Daniels’ words, “[s]imply put, White supremacists love
Twitter because Twitter loves them back.”55 Jones, for her part,
reached out to her friend, Twitter CEO Jack Dorsey, who is now
considering artificial intelligence (AI) of the kind used on Instagram to
identify hate speech and harassment.56
And, while the use of social media to amplify and spread obvious
forms of racial hatred is an ongoing problem that requires systematic
interventions, it is also the most straightforward to decode, literally.
For example, White supremacists routinely embed seemingly benign
symbols, such as cartoon characters or hand signs, in online content
to disseminate and normalize their propaganda. However, these are only
the most visible forms of coded inequity in which we can identify the
intentions of self-proclaimed racists. The danger, as I see it, is when
we allow these more obvious forms of virulent racism to monopolize
our attention, when the equivalent of slow death – the subtler and
even alluring forms of coded inequity – gets a pass. My book hopes to
focus more of our attention on this New Jim Code.
Today explicitly racist laws are no longer on the books, yet racism
continues in many areas of life as a result of a vast carceral apparatus
that facilitates legal discrimination against those “marked” with a
criminal record. So, while Black people in the abstract enjoy greater
freedom of movement, in practice many are immobilized by an
elaborate penal system. Not only those who are convicted, but entire
families and communities are stigmatized and penalized by
association – they carry a badge of dishonor with widespread
consequences, such as restrictions on where people can live, work, and
move around.57 This is the paradox Michelle Alexander documents:
the legalized discrimination afforded by the US penal system at a time
when de jure segregation is no longer acceptable. Thanks to the work
of Alexander and many others, social awareness about the carceral
system is growing and people are looking for “more humane”
alternatives, such as ankle monitors, and “more objective” measures,
such as crime prediction software, to decide who should be caged and
for how long. As widespread concern over mass incarceration
increases, people are turning to technological fixes that encode
inequity in a different form.
Growing exposure of social problems is fueling new forms of
obfuscation. For instance, public discourse is filled with frequent and
widespread condemnation of blatant acts of racism, albeit often
euphemized through the language of “racial incidents.” No longer
limited to television or newspapers, condemnation on social media
makes the practice of “dragging” people through the virtual public
square easier and swifter. Viral hashtags and memes allow almost
anyone to publicize racist transgressions, sometimes as they are
happening, with the potential for news to spread globally in a matter
of minutes. Dragging can be entertaining, and it is profitable for
corporations because it drives up clicks; but it is also cathartic for those who
previously had their experiences of racism questioned or dismissed. It
offers a collective ritual, which acknowledges and exposes the
everyday insults and dangers that are an ongoing part of Black life.
Video recordings, in particular, position viewers as witnesses whose
judgment may have political and professional repercussions for those
whose blatant racist actions are on view.
For example, in the spring of 2018, the TV network ABC cancelled the
revival of the sitcom Roseanne, after the show’s eponymous lead
actress, Roseanne Barr, tweeted a series of racist messages ending
with one that directed racially coded slurs at Valerie Jarrett, former
advisor to Barack Obama. Hashtags like #CancelRoseanne operate like
a virtual public square in which responses to racial insults are offered
and debated. Memes, too, are an effective tool for dragging racism.
One of the most creative and comedic depicts a White woman at
Oakland’s Lake Merritt who called the police on a Black man who was
barbecuing with the “wrong” type of grill. BBQBecky’s image from the
video recording has been cut and pasted at the scene of many “crimes”
– she is depicted calling the police on the 1963 March on Washington,
on Rosa Parks sitting on the bus, on Michelle and Barack Obama
getting sworn into office, and even on the Black Panther as he greets
cheering crowds at the Wakanda waterfalls – among many other faux
offenses.
In a context in which people are able to voice their discontent and
expose the absurdity of everyday insults, the pervasiveness of race talk
can serve as a proxy for more far-reaching social progress.
Paradoxically, as platforms like Twitter, Instagram, and YouTube give
more opportunities to put blatant acts of racism on trial, many of these
same companies encode more insidious forms of inequity in the very
design of their products and services. By drawing our attention to
Roseanne-like slurs or BBQBecky-like citizen policing, dragging may
obscure how the New Jim Code operates behind the scenes.
Similarly, the hypervisibility of Black celebrities, athletes, and
politicians can mask the widespread disenfranchisement of Black
communities through de facto segregation and the punishment
apparatus. How can a society filled with millions of people cheering for
LeBron, singing along to Beyoncé, tuning in to Oprah, and pining for
the presidency of Obama be … racist? But alas, “Black faces in high
places” is not an aberration but a key feature of a society structured by
White supremacy.58 In hindsight, we would not point to the
prominence of Black performers and politicians in the early twentieth
century as a sign that racism was on the decline. But it is common to
hear that line of reasoning today.
Tokenism is not simply a distraction from systemic domination. Black
celebrities are sometimes recruited to be the (Black) face of
technologies that have the potential to deepen racial inequities. For
example, in 2018 Microsoft launched a campaign featuring the rapper
Common to promote AI:
Today, right now, you have more power at your fingertips than
entire generations that came before you. Think about that. That’s
what technology really is. It’s possibility. It’s adaptability. It’s
capability. But in the end it’s only a tool. What’s a hammer
without a person who swings it? It’s not about what technology
can do, it’s about what you can do with it. You’re the voice, and it’s
the microphone. When you’re the artist, it’s the paintbrush. We
are living in the future we always dreamed of … AI empowering us
to change the world we see … So here’s the question: What will
you do with it?59
Savvy marketing on the part of Microsoft, for sure. What better
aesthetic than a Black hip-hop artist to represent AI as empowering,
forward-thinking, cool – the antithesis of anti-Black discrimination?
Not to mention that, as an art form, hip-hop has long pushed the
boundaries of technological experimentation through beatboxing,
deejaying, sampling, and more. One could imagine corporate-
sponsored rap battles between artists and AI coming to a platform
near you. The democratizing ethos of Common’s narration positions
the listener as a protagonist in a world of AI, one whose voice can
direct the development of this tool even though rarely a day goes by
without some report on biased bots. So what is happening behind the
screens?
A former Apple employee who noted that he was “not Black or
Hispanic” described his experience on a team that was developing
speech recognition for Siri, the virtual assistant program. As they
worked on different English dialects – Australian, Singaporean, and
Indian English – he asked his boss: “What about African American
English?” To this his boss responded: “Well, Apple products are for the
premium market.” And this happened in 2015, “one year after [the
rapper] Dr. Dre sold Beats by Dr. Dre to Apple for a billion dollars.”
The irony, the former employee seemed to imply, was that the
company could somehow devalue and value Blackness at the same
time.60 It is one thing to capitalize on the coolness of a Black artist to
sell (overpriced) products and quite another to engage the cultural
specificity of Black people enough to enhance the underlying design of
a widely used technology. This is why the notion that tech bias is
“unintentional” or “unconscious” obscures the reality – that there is no
way to create something without some intention and intended user in
mind (a point I will return to in the next chapter).
For now, the Siri example helps to highlight how just having a more
diverse team is an inadequate solution to discriminatory design
practices that grow out of the interplay of racism and capitalism. Jason
Mars, a Black computer scientist, expressed his frustration, saying:
“There’s a kind of pressure to conform to the prejudices of the world …
It would be interesting to have a black guy talk [as the voice for his
app], but we don’t want to create friction, either. First we need to sell
products.”61 How does the fist-pumping empowerment of Microsoft’s
campaign figure in a world in which the voices of Black programmers
like Mars are treated as conflict-inducing? Who gets muted in this
brave new world? The view that “technology is a neutral tool” ignores
how race also functions like a tool, structuring whose literal voice gets
embodied in AI. In celebrating diversity, tokenistic approaches to tech
development fail to acknowledge how the White aesthetic colors AI.
The “blandness” of Whiteness that some of my students brought up
when discussing their names is treated by programmers as normal,
universal, and appealing. The invisible power of Whiteness means that
even a Black computer scientist running his own company who
earnestly wants to encode a different voice into his app is still hemmed
in by the desire of many people for White-sounding voices.
So, as we work to understand the New Jim Code, it is important to
look beyond marketing rhetoric to the realities of selling and targeting
diversity. One such company, Diversity, Inc., which I will discuss in
more detail in Chapter 1, creates software that helps other companies
and organizations tailor marketing campaigns to different ethnic
groups. In the process it delineates over 150 distinct ethnicities and
“builds” new ones for companies and organizations that want to
market their goods or services to a subgroup not already represented
in the Diversity, Inc. database. Technologies do not just reflect racial
fault lines but can be used to reconstruct and repackage social
groupings in ways that seem to celebrate difference. But would you
consider this laudable or exploitative, opportunistic or oppressive?
And who ultimately profits from the proliferation of ethnically tailored
marketing? These are questions we will continue to wrestle with in the
pages ahead.
Finally, the New Jim Code is part of a broader push toward
privatization, in which the drive to cut costs and maximize profits, often at
the expense of other human needs, is a guiding rationale for public
and private sectors alike.62 Computational approaches to a wide array
of problems are seen as not only good but necessary, and a key feature
of cost-cutting measures is the outsourcing of decisions to “smart”
machines. Whether deciding which teacher to hire or fire or which
loan applicant to approve or decline, automated systems are alluring
because they seem to remove the burden from gatekeepers, who may
be too overworked or too biased to make sound judgments. Profit
maximization, in short, is rebranded as bias minimization.
But the outsourcing of human decisions is, at once, the insourcing of
coded inequity. As philosopher and sociologist Herbert Marcuse
remarked, “[t]echnological rationality has become political
rationality.” Considering Marcuse’s point, as people become more
attuned to racial biases in hiring, firing, loaning, policing, and a whole
host of consequential decisions – an awareness we might take to be a
sign of social progress – this very awareness also creates an
opportunity for those who seek to manage social life more efficiently.
The potential for bias creates a demand for more efficient and
automated organizational practices, such as the employment screening
carried out by AI – an example we will explore in more depth.
Important to this story is the fact that power operates at the level of
institutions and individuals – our political and mental structures –
shaping citizen-subjects who prioritize efficiency over equity.
It is certainly the case that algorithmic discrimination is only one facet
of a much wider phenomenon, in which what it means to be human is
called into question. What do “free will” and “autonomy” mean in a
world in which algorithms are tracking, predicting, and persuading us
at every turn? Historian Yuval Noah Harari warns that tech knows us
better than we know ourselves, and that “we are facing not just a
technological crisis but a philosophical crisis.”63 This is an industry
with access to data and capital that exceeds that of sovereign nations,
throwing even that sovereignty into question when such technologies
draw upon the science of persuasion to track, addict, and manipulate
the public. We are talking about a redefinition of human identity,
autonomy, core constitutional rights, and democratic principles more
broadly.64
In this context, one could argue that the racial dimensions of the
problem are a subplot of (even a distraction from) the main action of
humanity at risk. But, as philosopher Sylvia Wynter has argued, our
very notion of what it means to be human is fragmented by race and
other axes of difference. She posits that there are different “genres” of
humanity that include “full humans, not-quite humans, and
nonhumans,”65 through which racial, gendered, and colonial
hierarchies are encoded. The pseudo-universal version of humanity,
“the Man,” she argues, is only one form, and it is predicated on
anti-Blackness. As such, Black humanity and freedom entail thinking
and acting beyond the dominant genre, which could include telling
different stories about the past, the present, and the future.66
But what does this have to do with coded inequity? First, it’s true, anti-
Black technologies do not necessarily limit their harm to those coded
Black.67 However, a universalizing lens may actually hide many of the
dangers of discriminatory design, because in many ways Black people
already live in the future.68 The plight of Black people has consistently
been a harbinger of wider processes – bankers using financial
technologies to prey on Black homeowners, law enforcement using
surveillance technologies to control Black neighborhoods, or
politicians using legislative techniques to disenfranchise Black voters
– which then get rolled out on an even wider scale. An
#AllLivesMatter approach to technology is not only false inclusion but
also poor planning, especially by those who fancy themselves as
futurists.
Many tech enthusiasts wax poetic about a posthuman world and,
indeed, the expansion of big data analytics, predictive algorithms, and
AI animates digital dreams of living beyond the human mind and body
– even beyond human bias and racism. But posthumanist visions
assume that we have all had a chance to be human. How nice it must
be … to be so tired of living mortally that one dreams of immortality.
Like so many other “posts” (postracial, postcolonial, etc.),
posthumanism grows out of the Man’s experience. This means that, by
decoding the racial dimensions of technology and the way in which
different genres of humanity are constructed in the process, we gain a
keener sense of the architecture of power – and not simply as a top-
down story of powerful tech companies imposing coded inequity onto
an innocent public. This is also about how we (click) submit, because
of all that we seem to gain by having our choices and behaviors
tracked, predicted, and racialized. The director of research at
Diversity, Inc. put it to me like this: “Would you really want to see a
gun-toting White man in a Facebook ad?” Tailoring ads makes
economic sense for companies that try to appeal to people “like me”: a
Black woman whose sister-in-law was killed in a mass shooting, who
has had to “shelter in place” after a gunman opened fire in a
neighboring building minutes after I delivered a talk, and who worries
that her teenage sons may be assaulted by police or vigilantes. Fair
enough. Given these powerful associations, a gun-toting White man
would probably not be the best image for getting my business.
But there is a slippery slope between effective marketing and efficient
racism. The same sort of algorithmic filtering that ushers more
ethnically tailored representations into my feed can also redirect real
estate ads away from people “like me.” This filtering has been used to
show higher-paying job ads to men more often than to women, to
charge people in areas with a high density of Asian residents more for
standardized test prep courses, and to enact many other forms of coded
inequity. In cases of the second type especially, we observe how
geographic segregation animates the New Jim Code. While the gender
wage gap and the “race tax” (non-Whites being charged more for the
same services) are nothing new, the difference is that coded inequity
makes discrimination easier, faster, and even harder to challenge,
because there is not just a racist boss, banker, or shopkeeper to report.
Instead, the public must hold accountable the very platforms and
programmers that legally and often invisibly facilitate the New Jim
Code, even as we reckon with our desire for more “diversity and
inclusion” online and offline.
Taken together, all these features of the current era animate the New
Jim Code. While more institutions and people are outspoken against
blatant racism, discriminatory practices are becoming more deeply
embedded within the sociotechnical infrastructure of everyday life.
Likewise, the visibility of successful non-White individuals in almost
every social arena can obscure the reality of the systemic bias that still
affects many people. Finally, the proliferation of ever more
sophisticated ways to use ethnicity in marketing goods, services, and
even political messages generates more buy-in from those of us who
may not want to “build” an ethnicity but who are part of New Jim Code
architecture nevertheless.
The Anti-Black Box
Race after Technology integrates the tools of science and technology
studies (STS) and critical race studies to examine coded inequity and
our contemporary racial landscape. Taken together within the
framework of what I term race critical code studies, this approach
helps us open the Black box of coded inequity. “Black box” is a
metaphor commonly used in STS to describe how the social
production of science and technology is hidden from view. For
example, in The Black Box Society, legal scholar Frank Pasquale
(2014) interrogates the “secret algorithms” that are fundamental to
businesses, from Wall Street to Silicon Valley, and criticizes how the
law is used to aggressively protect commercial secrecy while ignoring
our right to privacy.69 His use of the term “Black box” draws on its
double meaning, as recording device and as mysterious object; and
here I recast this term to draw attention to the routine anti-Blackness
that inheres in so much tech development. What I call the anti-Black
box links the race-neutral technologies that encode inequity to the
race-neutral laws and policies that serve as powerful tools for White
supremacy.
An example is the Trump administration’s proposed “work for
welfare” policy, which imposes mandatory work requirements on
anyone who receives healthcare benefits through Medicaid.
Correction: not anyone. Some Republican-controlled states have
found a way to protect poor White Americans from the requirement by
instituting a waiver for people living in areas with a high
unemployment rate. Taken at face value, this looks like a fair
exception and seems to be race-neutral in that it could benefit poorer
Americans of all backgrounds. In practice, however, people living in
urban centers would not qualify, because their proximity to wealthier
suburbs pulls down the overall unemployment rate of the areas in
which the majority of Black urban residents live.
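The arithmetic of the waiver can be illustrated with invented numbers
(a hypothetical sketch in Python; the threshold and populations are
not the actual policy figures):

# Hypothetical waiver: work requirements are lifted where the *area-wide*
# unemployment rate is at least 10%. All figures below are invented.
WAIVER_THRESHOLD = 0.10

urban_core = {"labor_force": 50_000, "unemployed": 7_500}    # 15% locally
suburbs = {"labor_force": 150_000, "unemployed": 4_500}      # 3% locally

area_rate = (urban_core["unemployed"] + suburbs["unemployed"]) / (
    urban_core["labor_force"] + suburbs["labor_force"]
)

print(f"urban core rate: {urban_core['unemployed'] / urban_core['labor_force']:.0%}")  # 15%
print(f"area-wide rate: {area_rate:.0%}")                     # 6%
print("waiver applies:", area_rate >= WAIVER_THRESHOLD)       # False

Averaged into the wealthier suburbs, a 15 percent urban unemployment
rate disappears into a 6 percent area-wide figure, and the waiver
never reaches the residents who most need it.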
Public policy, then, like popular discourse, is filled with racial coding.
Rural :: White and urban :: Black; so, without ever making race
explicit, state lawmakers are able to carve out an exception for their
White constituents. In a country as segregated as the United States,
geography is a reliable proxy for race. If zip codes are a relatively low-
tech device for instituting racism, how might we apply this insight to
computer codes? How do they reinforce racist norms and structures
without explicitly invoking race? And can we develop a race-conscious
orientation to emerging technology, not only as a mode of critique but
as a prerequisite for designing technology differently?
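One answer can be sketched in a few lines (a hypothetical illustration
in Python; the zip codes, weights, and cutoff are invented): a
screening model that never sees race, but whose zip-code feature does
the same work.

# Invented zip-code weights that a model might learn from historical data
# shaped by residential segregation; race never appears as an input.
zip_weight = {"11101": 0.9, "11102": 0.2}  # hypothetical suburb vs. urban core

def screen_applicant(zip_code, income):
    # Half the score comes from geography, half from (capped) income.
    score = 0.5 * zip_weight.get(zip_code, 0.5) + 0.5 * min(income / 100_000, 1.0)
    return score >= 0.6

# Two applicants with identical incomes, different zip codes:
print(screen_applicant("11101", 60_000))  # True: 0.45 + 0.30 = 0.75
print(screen_applicant("11102", 60_000))  # False: 0.10 + 0.30 = 0.40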
Race as Technology
This field guide explores not only how emerging technologies hide,
speed up, or reinforce racism, but also how race itself is a kind of
technology70 – one designed to separate, stratify, and sanctify the
many forms of injustice experienced by members of racialized groups,
but one that people routinely reimagine and redeploy to their own
ends.
Human toolmaking is not limited to the stone instruments of our early
ancestors or to the sleek gadgets produced by the modern tech
industry. Human cultures also create symbolic devices that structure
society. Race, to be sure, is one of our most powerful tools – developed
over hundreds of years, varying across time and place, codified in law
and refined through custom, and, tragically, still considered by many
people to reflect immutable differences between groups. For that
reason, throughout this book, we will consider not only how racial
logics enter the design of technology but how race itself operates as a
tool of vision and division with often deadly results.
Racism is, let us not forget, a means to reconcile contradictions. Only a
society that extolled “liberty for all” while holding millions of people in
bondage would require such a powerful ideology in order to build a nation
amid so startling a contradiction. How else could one declare “[w]e
hold these truths to be self-evident, that all men are created equal, that
they are endowed by their Creator with certain unalienable Rights,”
and at the same time deny these rights to a large portion of the
population71 – namely by claiming that its members, by virtue of their
presumed lack of humanity, were never even eligible for those rights?72
Openly despotic societies, by contrast, are in no need of the
elaborate ideological apparatus that props up “free” societies.
Freedom, as the saying goes, ain’t free. But not everyone is required to
pay its steep price in equal measure. The same is true of the social
costs of technological progress.
Consider that the most iconic revolt “against machines,” as it is
commonly remembered, was staged in the early nineteenth century by
English textile workers, the Luddites. Often remembered as people
who were out of touch and hated technology, the Luddites were
actually protesting the social costs of technological “progress” that the
working class was being forced to accept. “To break the machine was
in a sense to break the conversion of oneself into a machine for the
accumulating wealth of another,” according to cultural theorist Imani
Perry.73 At a recent conference titled “AI & Ethics,” the
communications director of a nonprofit AI research company, Jack
Clark, pointed out that, although the term “Luddite” is often used
today as a term of disparagement for anyone who is presumed to
oppose (or even question!) automation, the Luddite response was
actually directed at the manner in which machinery was rolled out,
without consideration for its negative impact on workers and society
overall. Perhaps the current era of technological transformation, Clark
suggested, warrants a similar sensibility – demanding a more careful
and democratic approach to technology.74
Shifting from nineteenth-century England to late twenty-first-century
Mexico, sci-fi filmmaker Alex Rivera wrestles with a similar
predicament of a near future in which workers are not simply
displaced but inhabited by technology. Sleep Dealer (2008) is set in a
dystopian world of corporate-controlled water, militarized drones,
“aqua-terrorists” (or water liberators, depending on your sympathies),
and a walled-off border between Mexico and the United States. The
main protagonist, Memo Cruz, and his co-workers plug networked
cables into nodes implanted in their bodies. This enables them to
operate robots on the other side of the border, giving the United States
what it always wanted: “all the work without the workers.”75
Such fictional accounts find their real-life counterpart in “electronic
sweatshops,” where companies such as Apple, HP, and Dell treat
humans like automata, reportedly requiring Chinese workers to
complete tasks every three seconds over a 12-hour period, without
speaking or using the bathroom.76 Indeed, as I write, over 1,000
workers at Amazon in Spain have initiated a strike over wages and
rights, following similar protests in Italy and Germany in 2017. If we
probe exploitative labor practices, the stated intentions behind them
would likely amount to buzzwords such as “lower costs” and “greater efficiency,”
signaling a fundamental tension and paradox – the indispensable
disposability of those whose labor enables innovation. The language of
intentionality only makes one side of this equation visible, namely the
desire to produce goods faster and cheaper, while giving people “the
opportunity to work.” This fails to account for the social costs of a
technology in which global forms of racism, caste, class, sex, and
gender exploitation are the nuts and bolts of development.77
“Racing” after technology, in this context, is about the pursuit of
efficiency, neutrality, Ready to Update, Install Now, I Agree, and about
what happens when we (click) submit too quickly.78 Whether it is in
the architecture of machines or in the implementation of laws, racial
logic imposes “race corrections” that distort our understanding of the
world.79 Consider the court decision in the case against one Mr. Henry
Davis, who was charged with destruction of property for bleeding on
police uniforms after officers incorrectly identified him as having an
outstanding warrant and then beat him into submission:
On and/or about the 20th day of September 20, 2009 at or near
222 S. Florissant within the corporate limits of Ferguson,
Missouri, the above-named defendant did then and there
unlawfully commit the offense of “property damage” to wit did
transfer blood to the uniform.80
When Davis sued the officers, the judge tossed out the case, saying: “a
reasonable officer could have believed that beating a subdued and
compliant Mr. Davis while causing a concussion, scalp lacerations, and
bruising with almost no permanent damage, did not violate the
Constitution.”81 The judge “race-corrected” our reading of the US
Constitution, making it inapplicable to the likes of Mr. Davis – a
reminder that, whatever else we think racism is, it is not simply
ignorance, or a not knowing. Until we come to grips with the
“reasonableness” of racism, we will continue to look for it on the
bloody floors of Charleston churches and in the dashboard cameras on
Texas highways, and overlook it in the smart-sounding logics of
textbooks, policy statements, court rulings, science journals, and
cutting-edge technologies.
Beyond Techno-Determinism
In the following chapters we will explore not only how racism is an
output of technologies gone wrong, but also how it is an input, part of
the social context of design processes. The mistaken view that society
is affected by but does not affect technological development is one
expression of a deterministic worldview. Headlines abound: “Is
Facebook Making Us Lonely?”;82 “Genetic Engineering Will Change
Everything Forever”;83 “Pentagon Video Warns of ‘Unavoidable’
Dystopian Future for World’s Biggest Cities.”84 In each, you can
observe the conventional relationship proffered between technology
and society. It is the view that such developments are inevitable, the
engine of human progress … or decline.
An extreme and rather mystical example of techno-determinism was
expressed by libertarian journalist Matt Ridley, who surmised that not
even basic science is essential, because innovation has a trajectory all
its own:
Technology seems to change by a sort of inexorable, evolutionary
progress, which we probably cannot stop – or speed up much
either … Increasingly, technology is developing the kind of
autonomy that hitherto characterized biological entities … The
implications of this new way of seeing technology – as an
autonomous, evolving entity that continues to progress whoever is
in charge – are startling. People are pawns in a process. We ride
rather than drive the innovation wave. Technology will find its
inventors, rather than vice versa.85
Whereas such hard determinists, like Ridley, posit that technology has
a mind of its own, soft determinists grant that it is at least possible for
people to make decisions about technology’s trajectory. However, they
still imagine a lag period in which society is playing catch-up,
adjusting its laws and norms to the latest invention. In this latter view,
technology is often depicted as neutral, or as a blank slate developed
outside political and social contexts, with the potential to be shaped
and governed through human action. But, as Manuel Castells argues,
“[t]he dilemma of technological determinism is probably a false
problem, since technology is society, and society cannot be understood
or represented without its technological tools.”86
Considering Castells’ point about the symbiotic relationship between
technology and society, this book employs a conceptual toolkit that
synthesizes scholarship from STS and critical race studies.
Surprisingly, these two fields of study are not often put into direct
conversation. STS scholarship opens wide the “Black box” that
typically conceals the inner workings of socio-technical systems, and
critical race studies interrogates the inner workings of sociolegal
systems. Using this hybrid approach, we observe not only that any
given social order is impacted by technological development, as
determinists would argue, but that social norms, ideologies, and
practices are a constitutive part of technical design.
Much of the early research and commentary on race and information
technologies coalesced around the idea of the “digital divide,” with a
focus on unequal access to computers and the Internet that falls along
predictable racial, class, and gender lines. And, while attention to
access is vital, especially given numerous socioeconomic activities that
involve using the Internet, the larger narrative of a techno-utopia in
which technology will necessarily benefit all undergirds the “digital
divide” focus. Naively, access to computers and the Internet is posited
as a solution to inequality.87 And, to the extent that marginalized
groups are said to fear or lack an understanding of technology, the
“digital divide” framing reproduces culturally essentialist
understandings of inequality. A focus on technophobia and
technological illiteracy downplays the structural barriers to access, and
also ignores the many forms of tech engagement and innovation found
among people of color.
In fact, with the advent of mobile phones and wireless laptops, African
Americans and Latinxs are more active web users than White
people.88 Much of the African continent, in turn, is expected to
“leapfrog” past other regions, because it is not hampered by clunky
infrastructure associated with older technologies. In “The Revolution
Will Be Digitized: Afrocentricity and the Digital Public Sphere,” Anna
Everett critiques “the overwhelming characterizations of the brave
new world of cyberspace as primarily a racialized sphere of Whiteness”
that consigns Black people to the low-tech sphere – when they are
present at all.89 Other works effectively challenge the “digital divide”
framing by analyzing the racialized boundary constructed between
“low” and “high tech.”90 Likewise, Lisa Nakamura (2013) challenges
the model minority framing of Asian Americans as the “solution” to
the problem of race in a digital culture. She explains:
Different minorities have different functions in the cultural
landscape of digital technologies. They are good for different
kinds of ideological work … seeing Asians as the solution and
blacks as the problem [i.e. cybertyping] is and has always been a
drastic and damaging formulation which pits minorities against
each other …91
In contrast to critical race studies analyses of the dystopian digital
divide and cybertyping, another stream of criticism focuses on utopian
notions of a “race-free future” in which technologies would
purportedly render obsolete social differences that are divisive now.92
The idea that, “[o]n the Internet, nobody knows you’re a dog” (a line
from Peter Steiner’s famous 1993 New Yorker cartoon, featuring a
typing canine) exemplifies this vision. However, this idea relies on a
text-only web, which has been complicated by the rise of visual culture
on the Internet.93 For example, as already mentioned, Jessie Daniels
(2009) investigates the proliferation of White nationalist ideology and
communities online, unsettling any techno-utopian hopes for a
colorblind approach to social life in a digital era. And, as Alondra
Nelson shows, both the digital divide and the raceless utopia framings
posit race as a liability, as “either negligible or evidence of negligence,”
so that “racial identity, and blackness in particular, is the anti-avatar
of digital life.”94 It is also worth noting how, in both conceptions,
technology is imagined as impacting racial divisions – magnifying or
obliterating them – but racial ideologies do not seem to shape the
design of technology.
Race critical code studies would have us look at how race and racism
impact who has access to new devices, as well as how technologies are
produced in the first place. Two incisive works are particularly
relevant for thinking about the tension between innovation and
containment. In Algorithms of Oppression Safiya Noble (2018) argues
that the anti-Black and sexist Google search results – such as the
pornographic images that come up when you search for “Black girls” –
grow out of a “corporate logic of either willful neglect or a profit
imperative that makes money from racism and sexism,” as key
ingredients in the normative substrate of Silicon Valley. In a similar
vein, Simone Browne (2015), in Dark Matters: On the Surveillance of
Blackness, examines how surveillance technologies coproduce notions
of Blackness and explains that “surveillance is nothing new to black
folks”; from slave ships and slave patrols to airport security
checkpoints and stop-and-frisk policing practices, she points to the
“facticity of surveillance in black life.”95 Challenging a technologically
determinist approach, she argues that, instead of “seeing surveillance
as something inaugurated by new technologies,” to “see it as ongoing
is to insist that we factor in how racism and anti-Blackness undergird
and sustain the intersecting surveillances of our present order.”96 As
both Noble and Browne emphasize and as my book will expand upon,
anti-Black racism, whether in search results or in surveillance systems,
is not only a symptom or outcome, but a precondition for the
fabrication of such technologies.97
Race as technology: this is an invitation to consider racism in relation
to other forms of domination as not just an ideology or history, but as
a set of technologies that generate patterns of social relations, and
these become Black-boxed as natural, inevitable, automatic. As such,
this is also an invitation to refuse the illusion of inevitability in which
technologies of race come wrapped and to “hotwire” more habitable
forms of social organization in the process.98
Race critical code studies, as I develop it here, is defined not just by
what we study but also by how we analyze, questioning our own
assumptions about what is deemed high theory versus pop culture,
academic versus activist, evidence versus anecdote. The point is not
just to look beneath the surface in order to find connections between
these categories, but to pay closer attention to the surfaces themselves.
Here I draw upon the idea of thin description as a method for reading
surfaces – such as screens and skin – especially since a key feature of
being racialized is “to be encountered as a surface.”99 In
anthropologist John L. Jackson’s formulation, thin description is
“about how we all travel … through the thicket of time and space,
about the way … both of those trajectories might be constructively
thinned, theorized, concretized, or dislodged in service to questions
about how we relate to one another in a digital age.”100 He critiques
the worship of thick description within anthropology, arguing that it
“tries to pass itself off as more than it is, as embodying an expertise
that simulates (and maybe even surpasses) any of the ways in which
the people being studied might know themselves … one that would
pretend to see everything and, therefore, sometimes sees less than it
could.”101
Thinness, in this way, attempts a humble but no less ambitious
approach to knowledge production. Thinness allows greater elasticity,
engaging fields of thought and action too often disconnected. This
analytic flexibility, in my view, is an antidote to digital disconnection,
tracing links between individual and institutional, mundane and
spectacular, desirable and deadly in a way that troubles easy
distinctions.
At the same time, thin description is a method of respecting particular
kinds of boundaries. According to Jackson,
If thick description imagines itself able to amass more and more
factual information in service to stories about cultural difference,
“thin description” doesn’t fall into the trap of conceptualizing its
task as providing complete and total knowledge … So, there are
secrets you keep. That you treat very preciously. Names of
research subjects you share but many more you do not. There is
information veiled for the sake of story. For the sake of much
more.102
If the New Jim Code seeks to penetrate all areas of life, extracting data,
producing hierarchies, and predicting futures, thin description
exercises a much needed discretion, pushing back against the all-
knowing, extractive, monopolizing practices of coded inequity.
Thinness is not an analytic failure, but an acceptance of fragility … a
methodological counterpoint to the hubris that animates so much tech
development. What we know today about coded inequity may require
a complete rethinking, as social and technical systems change over
time. Let’s not forget: racism is a mercurial practice, shape-shifting,
adept at disguising itself in progressive-like rhetoric. If our thinking
becomes too weighed down by our own assuredness, we are likely to
miss the avant-garde stylings of NextGen Racism as it struts by.
Beyond Biased Bots
How do we move beyond the idea of biased bots, so we can begin to
understand a wide range of coded inequities? Here I propose four
dimensions to the New Jim Code: engineered inequity, default
discrimination, coded exposure, and technological benevolence; and I
will elaborate on them in the following chapters.
Chapter 1 takes a closer look at how engineered inequity explicitly
works to amplify social hierarchies that are based on race, class, and
gender and how the debate regarding “racist robots” is framed in
popular discourse. I conclude that robots can be racist, given their
design in a society structured by interlocking forms of domination.103
Chapter 2 looks at what happens when tech developers do not attend
to the social and historical context of their work and explores how
default discrimination grows out of design processes that ignore social
cleavages. I also consider how what are often depicted as glitches might
serve as powerful opportunities to examine the overall system, a
technological canary in the coal mine.
Chapter 3 examines the multiple forms of coded exposure that
technologies enable, from Polaroid cameras to computer software.
Here I think through the various forms of visibility and how, for
racialized groups, the problem of being watched (but not seen) relates
to newfangled forms of surveillance.
Chapter 4 explores how technological benevolence animates tech
products and services that offer fixes for social bias. Here I take a look
at technologies that explicitly work to address different forms of
discrimination, but that may still end up reproducing, or even
deepening, discriminatory processes because of the narrow way in
which “fairness” is defined and operationalized.
Finally, Chapter 5 examines how practitioners, scholars, activists,
artists, and students are working to resist and challenge the New Jim
Code – and how you, the reader, can contribute to an approach to
technology that moves beyond accessing new products, to advocating
for justice-oriented design practices.
Taken as a whole, the conceptual toolkit we build around a race critical
code studies will be useful, I hope, for analyzing a wide range of
phenomena – from the explicit codification of racial difference in
particular devices to the implicit assumption that technology is race-
neutral – through which Whiteness becomes the default setting for
tech development. This field guide critically interrogates the
progressive narratives that surround technology and encourages us to
examine how racism is often maintained or perpetuated through
technical fixes to social problems. And finally, the next chapters
examine the different facets of coded inequity with an eye toward
designing them differently. Are you ready?
Notes
1. Kaba describes “grounded hope” as a philosophy of living that must
be practiced every day, one that is different from optimism and
does not protect one from feeling sadness, frustration, or anger. See
her “Beyond Prisons” podcast, episode 19, at
https://shadowproof.com/2018/01/05/beyond-prisons-episode-
19-hope-is-a-discipline-feat-mariame-kaba.
2. Brown 2015, p. 26.
3. Inevitably, my students turn the question back on me: “Tell us about
your name, prof?” As I was born to an African American father and
a Persian Indian mother, my parents wanted me to have a first
name with Arabic origins, but one that was short enough, so
English speakers wouldn’t butcher it. They were mostly successful,
except that my friends still call me “Ru” … nicknames are a form of
endearment after all. What I find amusing these days is getting
messages addressed to “Mr. Benjamin” or “Mr. Ruha.” Since
Benjamin is more often used as a masculine first name, people
whom I have never met routinely switch the order in their heads
and mis-gender me as a result. I sometimes wonder whether I
receive some fleeting male privilege – more deference, perhaps.
This, after all, is the reason why some of my female students say
their parents gave them more gender-neutral names: to delay (if
not diminish) sexist assumptions about their qualifications and
capacities. Similar rationale for my Black, Asian, and Latinx
students with stereotypically White-sounding names: “My parents
didn’t want me to have a hard time,” “They wanted me to have a
normal American name” (where “American” is always coded
“White”).
4. The Apples and Norths of the world tend to experience less ridicule
and more fascination, owing to their celebrity parentage, which tells
us that there is nothing intrinsic to a “good” name.
5. So, is the solution for those with racially stigmatized names to code-
switch by adopting names that offer more currency on the job
market? Or does this simply accommodate bias and leave it in
place? In a number of informal experiments, job seekers put this
idea to the test. Jose Zamora dropped one letter from his first name
and found that “Joe Zamora,” with all the same education and
credentials, magically started hearing from employers. Similarly,
after two years of searching for a job, Yolanda Spivey changed the
name on her résumé to “Bianca White,” and suddenly her inbox was
full of employers interested in interviewing her. What stunned
Yolanda most was that, while the same résumé was posted with her
real name on the employment website, employers were repeatedly
calling “Bianca,” desperate to get an interview.
6. When the study was replicated in France, another team found that
Christian-sounding names had a similar value over and above
Muslim-sounding names, and they could not explain the difference
through other factors such as experience or education.
7. Caliskan et al. 2017. Fun fact: did you know that the words
“algorithm” and “algebra” come from the name of a Persian astronomer
and mathematician, Muhammad Ibn Musa al-Khwarizmi, whose last
name was Latinized as Algorithmi? I suspect, given how his name
would likely trigger surveillance systems today, he would cheer on
algorithmic audits that are trying to prevent such biased
associations!
8. I’m thinking of Browne’s (2015) “racializing surveillance,”
Broussard’s (2018) “technochauvinism,” Buolamwini’s (2016)
“coded gaze,” Eubanks’ (2018) “digital poorhouse,” Noble’s (2018)
“algorithms of oppression and technological redlining,” or Wachter-
Boettcher’s (2017) “algorithmic inequity” (among other kindred
formulations) as “cousin concepts” related to the New Jim Code.
9. O’Neil 2016, p. 23.
10. Another example is Wilmer Catalan-Ramirez, an undocumented
Chicago resident who was listed without his knowledge in the city’s
gang database as a member of two rival gangs (Saleh 2018).
11. See the CalGang Criminal Intelligence System report at
http://www.voiceofsandiego.org/wp-
content/uploads/2016/08/CalGangs-audit. See also Harvey
2016.
12. Harvey 2016.
13. Muhammad 2011, p. 20, emphasis added; see also Zuberi 2003.
14. Wacquant 2017, p. 2.
15. Wacquant 2017; emphasis added.
16. Sweeney 2013.
17. boyd and Elish 2018.
18. Baldwin 1998, p. 723.
19. In her letter to Zuckerberg, Milner (2018) continues:
“Histories of redlining, segregation, voter disenfranchisement and
state sanctioned violence have not disappeared, but have been
codified and disguised through new big data regimes.”
20. This refers to a classic line in the film Wizard of Oz in which Oz
attempts to conceal his machinations: “Pay no attention to the man
behind the curtain.”
21. boyd and Elish 2018.
22. Alexander 2018.
23. Frenkel et al. 2018.
24. Cohen 2017.
25. Gelin 2018.
26. Liao 2018.
27. Talk by Christina Colclough at the AI Ethics conference, March 10,
2018, Princeton University, sponsored by the Center for
Information Technology Policy and the University Center for
Human Values. See also http://www.thefutureworldofwork.org.
28. Monahan and Palmer 2009, p. 617.
29. Hart 2018.
30. Thompson and Lapowsky 2018.
31. Twitter @kevinroose, November 15, 2018, 3:33 p.m.
32. Twitter @katecrawford, November 15, 2018, 4:37 p.m.
33. Solon 2018.
34. Streitfeld 2019.
35. Weller 2017.
36. Lebowitz 2018.
37. Hoyle 2018.
38. According to one report, John Lilly, a Silicon Valley-based venture
capitalist, “tries to help his 13-year-old son understand that he is
being manipulated by those who built the technology. ‘I try to tell
him somebody wrote code to make you feel this way – I’m trying to
help him understand how things are made, the values that are going
into things and what people are doing to create that feeling,’ Mr.
Lilly said” (Bowles 2018).
39. Roberts 2018. Data journalist Meredith Broussard calls this
“technochauvinism,” which she describes as the “belief that tech is
always the solution … Somehow, in the past two decades, many of
us began to assume that computers get it right and people get it
wrong” (Broussard 2018, pp. 7–8).
40. See Bridges’ (2017) analysis of the “poverty of privacy rights.”
41. Edelman 2018.
42. Echoing the concerns of their Silicon Valley counterparts,
Brooklyn parents expressed worry about the “wealth of information
on each student, from age, ethnicity, and extracurricular activities,
to grades, test scores and disciplinary penalties” (Edelman 2018).
43. Baldwin and Kenan 2011, p. 158. See also DuBois (1935) on
Whiteness as a “public and psychological wage” for the White
working class, Roediger (2007) on the “wages of Whiteness,” and
Lewis (2004) on “hegemonic Whiteness”.
44. See https://www.wired.com/story/algorithms-netflix-tool-for-
justice/?BottomRelatedStories_Sections_2.
45. “#OscarsSoWhite also known as Oscars So White or Oscar
Whitewash, is a hashtag used to protest the underrepresentation of
people of color in the annual Academy Award nominations. The
hashtag came into use during the 2015 award cycle, and re-
appeared in 2016” (from
https://knowyourmeme.com/memes/oscars-so-white).
46. Williams 2015.
47. Sieczkowski 2012.
48. King 2006.
49. Cresci 2015.
50. N-Tech Lab 2015.
51. See https://ntechlab.com.
52. N-Tech Lab 2015; in fact, in April 2018 China made headlines for
apprehending a suspect at a concert with nearly 60,000 people in
attendance with the help of a similar program; see
https://www.washingtonpost.com/news/worldviews/wp/2018/04/13/china-
crime-facial-recognition-cameras-catch-suspect-at-concert-with-
60000-people.
53. In “The Algorithmic Rise of the ‘Alt-Right,’” Daniels writes: “There
are two strands of conventional wisdom unfolding in popular
accounts of the rise of the alt-right. One says that what’s really
happening can be attributed to a crisis in White identity: the alt-
right is simply a manifestation of the angry White male who has
status anxiety about his declining social power. Others contend that
the alt-right is an unfortunate eddy in the vast ocean of Internet
culture. Related to this is the idea that polarization, exacerbated by
filter bubbles, has facilitated the spread of Internet memes and fake
news promulgated by the alt-right. While the first explanation tends
to ignore the influence of the Internet, the second dismisses the
importance of White nationalism. I contend that we have to
understand both at the same time” (Daniels 2018, p. 61).
54. The term for the specific form of anti-Black racist misogyny that
Black women experience is “misogynoir” (Bailey and Trudy 2018).
55. Daniels 2017.
56. Thompson 2018a.
57. Wacquant 2005.
58. Taylor 2016.
59. Visit https://www.youtube.com/watch?v=9tucY7Jhhs4.
60. These remarks were made by an audience member at the Data for
Black Lives conference at MIT Media Lab in Cambridge, MA on
January 12, 2019.
61. Hardy 2016.
62. This turn is what scholars refer to as neoliberalism – “a peculiar
form of reason that configures all aspects of existence in economic
terms” (Brown 2015, p. 17).
63. Thompson 2018b.
64. I am indebted to legal scholar Patricia Williams for underscoring
this point: personal communication, November 9, 2018.
65. Weheliye 2014, p. 3.
66. Wynter 2003.
67. paperson 2017, p. 12.
68. This formulation is inspired by Jarmon 2013.
69. Pasquale 2014, p. 3.
70. Coleman 2009; Chun 2009.
71. Perry (2011, p. 22) writes: “Americans have a long tradition of
reconciling inconsistencies between professed values and cultural
practices … Therefore, we do not experience cognitive dissonance
when such inconsistencies arise; rather, we cultivate explanations
that allow them to operate in tandem.”
72. Morgan 1975; Smedley 2007.
73. Perry 2018, p. 45.
74. Such care is often articulated in terms of the “precautionary
principle” as a way to manage the uncertainties associated with
technoscience, though too often it gets limited to questions of ethics
and safety rather than extending to issues of politics and
democracy. As adrienne maree brown (2017, p. 87) explains, “we
have to decentralize our idea of where solutions and decisions
happen, where ideas come from.”
75. Turan 2009.
76. Moore 2011.
77. See D’Ignazio and Klein (2019) for a discussion of “data feminism”
where the focus is not just on gender but on power more broadly.
78. As Toni Cade Bambara (1970, p. 110) famously cautioned in a
different context, “[n]ot all speed is movement.”
79. Braun 2014.
80. Daly 2014.
81. Daly 2014.
82. Marche 2012.
83. Kurzgesagt 2016.
84. Turse 2016.
85. Ridley 2015.
86. Castells 2009, p. 5.
87. Van Dijk 2006.
88. See Daniels 2013. Daniels also says: “According to the Pew
Research Center’s Internet & American Life Project … African-
Americans and English-speaking Latinos continue to be among the
most active users of the mobile web. Cell phone ownership is higher
among African Americans and Latinos than among Whites (87
percent versus 80 percent) and minority cell phone owners take
advantage of a much greater range of their phones’ features
compared with white mobile phone users” (2013, p. 698).
89. Everett 2002, p. 133.
90. “Though rarely represented today as full participants in the
information technology revolution, Black people are among the
earliest adopters and comprise some of the most ardent and
innovative users of IT (information technology). It is too often
widespread ignorance of African Diasporic people’s long history of
technology adoption that limits fair and fiscally sound IT
investments, policies and opportunities for Black communities
locally and globally. Such racially aligned politics of investment
create a self-fulfilling-prophesy or circular logic wherein the lack of
equitable access to technology in Black communities produces a
corresponding lack of technology literacy and competencies” (from
http://international.ucla.edu/africa/event/1761, the home page of
AfroGEEKS: From Technophobia to Technophilia).
91. Nakamura 2002, pp. 22–3.
92. Nelson 2002, p. 1.
93. Nakamura 2002; 2008.
94. Nelson 2002, p. 1.
95. Noble 2018, p. 5; Browne 2015, p. 7.
96. Browne 2015, pp. 8–9.
97. See Jasanoff (2004, p. 3) for an elaboration on co-production: co-
production is a “shorthand for the proposition that the ways in
which we know and represent the world (both nature and society)
are inseparable from the ways in which we choose to live in it.
Knowledge and its material embodiments [e.g. technology] are at
once products of social work and constitutive of forms of social life;
society cannot function without knowledge any more than
knowledge can exist without appropriate social supports. Scientific
knowledge, in particular, is not a transcendent mirror of reality. It
both embeds and is embedded in social practices, identities, norms,
conventions, discourses, instruments and institutions – in short, in
all the building blocks of what we term the social. The same can be
said even more forcefully of technology” (p. 3).
98. I am inspired here by paperson’s (2017, p. 5) discussion of
“hotwiring” settler colonial technologies: “Instead of settler
colonialism as an ideology, or as history, you might consider settler
colonialism as a set of technologies – a frame that could help you to
forecast colonial next operations and to plot decolonial directions …
Technologies mutate, and so do these relationships.”
99. Samatar 2015; I am indebted to Fatima Siwaju, whose question
about methodology during the 2018 African American Studies
Faculty-Graduate Seminar prompted me to elaborate my thinking
here.
100. Jackson 2013, p. 16.
101. Jackson 2013, p. 14.
102. Jackson 2013, p. 153.
103. The concept “imperialist White supremacist capitalist patriarchy”
was coined by bell hooks (2015); it was intended to pick out the
interlocking systems of domination also theorized by Crenshaw
(1991) and Collins (1990).
1
Engineered Inequity
Are Robots Racist?
WELCOME TO THE FIRST INTERNATIONAL BEAUTY CONTEST
JUDGED BY ARTIFICIAL INTELLIGENCE.
So goes the cheery announcement for Beauty AI, an initiative
developed by the Australian- and Hong Kong-based organization Youth
Laboratories in conjunction with a number of companies that worked
together to stage the first ever beauty contest judged by robots (Figure
1.1).1 The venture involved a few seemingly straightforward steps:
1. Contestants download the Beauty AI app.
2. Contestants make a selfie.
3. Robot jury examines all the photos.
4. Robot jury chooses a king and a queen.
5. News spreads around the world.
As for the rules, participants were not allowed to wear makeup or
glasses or to don a beard. Robot judges were programmed to assess
contestants on the basis of wrinkles, face symmetry, skin color,
gender, age group, ethnicity, and “many other parameters.” Over
6,000 submissions from approximately 100 countries poured in. What
could possibly go wrong?
Figure 1.1 Beauty AI
Source: http://beauty.ai
On August 2, 2016, the creators of Beauty AI expressed dismay at the
fact that “the robots did not like people with dark skin.” Of the 44
winners across the various age groups, all but six were White, and
“only one finalist had visibly dark skin.”2
at the time the most advanced machine-learning technology available.
Called “deep learning,” the software is trained to code beauty using
pre-labeled images; the contestants’ photos are then judged against
the algorithm’s embedded preferences.3
trained eye of the algorithm.
As one report about the contest put it, “[t]he simplest explanation for
biased algorithms is that the humans who create them have their own
deeply entrenched biases. That means that despite perceptions that
algorithms are somehow neutral and uniquely objective, they can often
reproduce and amplify existing prejudices.”4 Columbia University
professor Bernard Harcourt remarked: “The idea that you could come
up with a culturally neutral, racially neutral conception of beauty is
simply mind-boggling.” Beauty AI is a reminder, Harcourt notes, that
humans are really doing the thinking, even when “we think it’s neutral
and scientific.”5 And it is not just the human programmers’ preference
for Whiteness that is encoded, but the combined preferences of all the
humans whose data are studied by machines as they learn to judge
beauty and, as it turns out, health.
In addition to the skewed racial results, the framing of Beauty AI as a
kind of preventative public health initiative raises the stakes
considerably. The team of biogerontologists and data scientists
working with Beauty AI explained that valuable information about
people’s health can be gleaned by “just processing their photos” and
that, ultimately, the hope is to “find effective ways to slow down ageing
and help people look healthy and beautiful.”6 Given the overwhelming
Whiteness of the winners and the conflation of socially biased notions
of beauty and health, darker people are implicitly coded as unhealthy
and unfit – assumptions that are at the heart of scientific racism and
eugenic ideology and policies.
Deep learning is a subfield of machine learning in which “depth” refers
to the layers of abstraction that a computer program makes, learning
more “complicated concepts by building them out of simpler ones.”7
With Beauty AI, deep learning was applied to image recognition; but it
is also a method used for speech recognition, natural language
processing, video game and board game programs, and even medical
diagnosis. Social media filtering is the most common example of deep
learning at work, as when Facebook auto-tags your photos with
friends’ names or when apps decide which news and advertisements to
show you to increase the chances that you’ll click. Within machine
learning there is a distinction between “supervised” and
“unsupervised” learning. Beauty AI was supervised, because the
images used as training data were pre-labeled, whereas unsupervised
deep learning uses data with very few labels. Mark Zuckerberg refers
to deep learning as “the theory of the mind … How do we model – in
machines – what human users are interested in and are going to do?”8
But the question for us is, is there only one theory of the mind, and
whose mind is it modeled on?
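For readers who want to see the mechanics, here is a minimal sketch, in Python with the scikit-learn library, of how supervised learning reproduces its labels. This is not Beauty AI’s actual code, which is not public; the features, labels, and weights are invented for illustration. The point is that a model trained on pre-labeled examples can only learn the preferences of whoever produced the labels:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: each row is (skin_tone, face_symmetry),
# both scaled 0 to 1. Suppose the raters who pre-labeled the images
# systematically scored lighter skin as more "beautiful."
X_train = rng.random((1000, 2))
y_train = (0.8 * (1 - X_train[:, 0]) + 0.2 * X_train[:, 1]
           + rng.normal(0, 0.05, 1000)) > 0.5

# The model dutifully learns whatever pattern the labels encode.
model = LogisticRegression().fit(X_train, y_train)

# Two contestants identical in every respect except skin tone:
lighter, darker = [[0.2, 0.9]], [[0.9, 0.9]]
print(model.predict_proba(lighter)[0, 1])  # high "beauty" probability
print(model.predict_proba(darker)[0, 1])   # low "beauty" probability

No programmer typed “prefer light skin”; the bias arrives with the training labels and is laundered into an apparently neutral score.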
It may be tempting to write off Beauty AI as an inane experiment or
harmless vanity project, an unfortunate glitch in the otherwise neutral
development of technology for the common good. But, as explored in
the pages ahead, such a conclusion is naïve at best. Robots exemplify
how race is a form of technology itself, as the algorithmic judgments of
Beauty AI extend well beyond adjudicating attractiveness and into
questions of health, intelligence, criminality, employment, and many
other fields, in which innovative techniques give rise to newfangled
forms of racial discrimination. Almost every day a new headline
sounds the alarm, alerting us to the New Jim Code:
“Some algorithms are racist”
“We have a problem: Racist and sexist robots”
“Robots aren’t sexist and racist, you are”
“Robotic racists: AI technologies could inherit their creators’ biases”
Racist robots, as I invoke them here, represent a much broader
process: social bias embedded in technical artifacts, the allure of
objectivity without public accountability. Race as a form of technology
– the sorting, establishment and enforcement of racial hierarchies
with real consequences – is embodied in robots, which are often
presented as simultaneously akin to humans but different and at times
superior in terms of efficiency and regulation of bias. Yet the way
robots can be racist often remains a mystery or is purposefully hidden
from public view.
Consider that machine-learning systems, in particular, allow officials
to outsource decisions that are (or should be) the purview of
democratic oversight. Even when public agencies are employing such
systems, private companies are the ones developing them, thereby
acting like political entities but with none of the checks and balances.
They are, in the words of one observer, “governing without a
mandate,” which means that people whose lives are being shaped in
ever more consequential ways by automated decisions have very little
say in how they are governed.9
For example, in Automating Inequality Virginia Eubanks (2018)
documents the steady incorporation of predictive analytics by US
social welfare agencies. Among other promises, automated decisions
aim to mitigate fraud by depersonalizing the process and by
determining who is eligible for benefits.10 But, as she documents,
these technical fixes, often promoted as benefiting society, end up
hurting the most vulnerable, sometimes with deadly results. Her point
is not that human caseworkers are less biased than machines – there
are, after all, numerous studies showing how caseworkers actively
discriminate against racialized groups while aiding White applicants
deemed more deserving.11 Rather, as Eubanks emphasizes, automated
welfare decisions are not magically fairer than their human
counterparts. Discrimination is displaced and accountability is
outsourced in this postdemocratic approach to governing social life.12
So, how do we rethink our relationship to technology? The answer
partly lies in how we think about race itself and specifically the issues
of intentionality and visibility.
I Tinker, Therefore I Am
Humans are toolmakers. And robots, we might say, are humanity’s
finest handiwork. In popular culture, robots are typically portrayed as
humanoids, more efficient and less sentimental than Homo sapiens.
At times, robots are depicted as having human-like struggles,
wrestling with emotions and an awakening consciousness that blurs
the line between maker and made. Studies about how humans
perceive robots indicate that, when that line becomes too blurred, it
tends to freak people out. The technical term for it is the “uncanny
valley” – which indicates the dip in empathy and increase in revulsion
that people experience when a robot appears to be too much like us.13
Robots are a diverse lot, with as many types as there are tasks to
complete and desires to be met: domestic robots; military and police
robots; sex robots; therapeutic robots – and more. A robot is any
machine that can perform a task, simple or complex, directed by
humans or programmed to operate automatically. The most advanced
are smart machines designed to learn from and adapt to their
environments, created to become independent of their makers. We
might like to think that robotic concerns are a modern phenomenon,14
but our fascination with automata goes back to the Middle Ages, if not
before.15
In An Anthropology of Robots and AI, Kathleen Richardson observes
that the robot has “historically been a way to talk about
dehumanization” and, I would add, not talk about racialization.16 The
etymology of the word robot is Czech; it comes from a word for
“compulsory service,” itself drawn from the Slavic robota (“servitude,
hardship”).17 So yes, people have used robots to express anxieties over
annihilation, including over the massive threat of war machines. But
robots also convey an ongoing agitation about human domination over
other humans!18
The first cultural representation that employed the word robot was a
1920 play by a Czech writer whose machine was a factory worker of
limited consciousness.19 Social domination characterized the cultural
laboratory in which robots were originally imagined. And, technically,
people were the first robots. Consider media studies scholar Anna
Everett’s earliest experiences using a computer:
In powering up my PC, I am confronted with the DOS-based text
that gave me pause … “Pri. Master Disk, Pri. Slave Disk, Sec.
Master, Sec. Slave.” Programmed here is a virtual hierarchy
organizing my computer’s software operations … I often
wondered why the programmers chose such signifiers that hark
back to our nation’s ignominious past … And even though I
resisted the presumption of a racial affront or intentionality in
such a peculiar deployment of the slave and master coupling, its
choice as a signifier of the computer’s operations nonetheless
struck me.20
Similarly, a 1957 article in Mechanix Illustrated, a popular “how-to”
magazine that ran from 1928 to 2001, predicted that, by 1965:
Slavery will be back! We’ll all have personal slaves again … [who
will] dress you, comb your hair and serve meals in a jiffy. Don’t be
alarmed. We mean robot “slaves.”21
It goes without saying that readers, so casually hailed as “we,” are not
the descendants of those whom Lincoln freed. This fact alone offers a
glimpse into the implicit Whiteness of early tech culture. We cannot
assume that the hierarchical values and desires that are projected onto
“we” – We, the People with inalienable rights and not You, the
Enslaved who serve us meals – are simply a thing of the past (Figure
1.2).
Coincidentally, on my way to give a talk – mostly to science,
technology, engineering, and mathematics (STEM) students at Harvey
Mudd College – that I had planned to kick off with this Mechanix ad, I
passed two men in the airport restaurant and overheard one say to the
other: “I just want someone I can push around …” So simple yet so
profound in articulating a dominant and dominating theory of power
that many more people feel emboldened to state, unvarnished, in the
age of Trump. Push around? I wondered, in the context of work or
dating or any number of interactions. The slavebot, it seems, has a
ready market!
For those of us who believe in a more egalitarian notion of power, of
collective empowerment without domination, how we imagine our
relation to robots offers a mirror for thinking through and against race
as technology.
Figure 1.2 Robot Slaves
Source: Binder 1957
It turns out that the disposability of robots and the denigration of
racialized populations go hand in hand. We can see this when police
officers use “throwbots” – “a lightweight, ruggedized platform that can
literally be thrown into position, then remotely controlled from a
position of safety” – to collect video and audio surveillance for use by
officers. In the words of a member of one of these tactical teams, “[t]he
most significant advantage of the throwable robot is that it ‘allows
them [sc. the officers] to own the real estate with their eyes, before
they pay for it with their bodies.’”22 Robots are not the only ones
sacrificed on the altar of public safety. So too are the many Black
victims whose very bodies become the real estate that police officers
own in their trigger-happy quest to keep the peace. The intertwining
history of machines and slaves, in short, is not simply the stuff of fluff
magazine articles.23
While many dystopic predictions signal a worry that humans may one
day be enslaved by machines, the current reality is that the tech labor
force is already deeply unequal across racial and gender lines.
Although not the same as the structure of enslavement that serves as
an analogy for unfreedom, Silicon Valley’s hierarchy runs from the
highest-paid creatives and entrepreneurs – mostly White men and a
few White women – down to the lowest-paid manual laborers, “those
cleaning their offices and assembling circuit boards,” work usually
performed by “immigrants and outsourced labor, often women living
in the global south.”24 The “diasporic diversity” embodied by the
South Asian and Asian American tech workforce does not challenge
this hierarchy, because these workers continue to be viewed as a
“new digital ‘different caste.’” As Nakamura notes, “no
amount of work can make them part of the digital economy as
‘entrepreneurs’ or the ‘new economic men.’”25 Racism, in this way, is a
technology that is “built into the tech industry.”26 But how does racism
“get inside” and operate through new forms of technology?
To the extent that machine learning relies on large, “naturally
occurring” datasets that are rife with racial (and economic and
gendered) biases, the raw data that robots are using to learn and make
decisions about the world reflect deeply ingrained cultural prejudices
and structural hierarchies.27 Reflecting on the connection between
workforce diversity and skewed datasets, one tech company
representative noted that, “if the training data is produced by a racist
society, it won’t matter who is on the team, but the people who are
affected should also be on the team.”28 As machines become more
“intelligent,” that is, as they learn to think more like humans, they are
likely to become more racist. But this is not inevitable, so long as we
begin to take seriously and address the matter of how racism
structures the social and technical components of design.
Raising Robots
So, are robots racist? Not if by “racism” we only mean white hoods and
racial slurs.29 Too often people assume that racism and other forms of
bias must be triggered by an explicit intent to harm; for example,
linguist John McWhorter argued in Time magazine that “[m]achines
cannot, themselves, be racists. Even equipped with artificial
intelligence, they have neither brains nor intention.”30 But this
assumes that self-conscious intention is what makes something racist.
Those working in the belly of the tech industry know that this
conflation will not hold up to public scrutiny. As one Google
representative lamented, “[r]ather than treating malfunctioning
algorithms as malfunctioning machines (‘classification errors’), we are
increasingly treating tech like asshole humans.” He went on to propose
that “we [programmers] need to stop the machine from behaving like a
jerk because it can look like it is being offensive on purpose.”31 If
machines are programmed to carry out tasks, both they and their
designers are guided by some purpose, that is to say, intention. And in
the face of discriminatory effects, if those with the power to design
differently choose business as usual, then they are perpetuating a
racist system whether or not they are card-carrying members of their
local chapter of Black Lives Matter.
Robots are not sentient beings, sure, but racism flourishes well beyond
hate-filled hearts.32 An indifferent insurance adjuster who uses the
even more disinterested metric of a credit score to make a seemingly
detached calculation may perpetuate historical forms of racism by
plugging numbers in, recording risk scores, and “just doing her job.”
Thinking with Baldwin, someone who insists on his own racial
innocence despite all evidence to the contrary “turns himself into a
monster.”33 No malice needed, no N-word required, just lack of
concern for how the past shapes the present – and, in this case, the US
government’s explicit intention to concentrate wealth in the hands of
White Americans, in the form of housing and economic policies.34
Detachment in the face of this history ensures its ongoing codification.
Let us not forget that databases, just like courtrooms, banks, and
emergency rooms, do not contain organic brains. Yet legal codes,
financial practices, and medical care often produce deeply racist
outcomes.
The intention to harm or exclude may guide some technical design
decisions. Yet even when they do, these motivations often stand in
tension with aims framed more benevolently. Even police robots that
can use lethal force while protecting officers from harm are clothed in
the rhetoric of public safety.35 This is why we must separate
“intentionality” from its strictly negative connotation in the context of
racist practices, and examine how aiming to “do good” can very well
coexist with forms of malice and neglect.36 In fact a do-gooding ethos
often serves as a moral cover for harmful decisions. Still, the view that
ill intent is always a feature of racism is common: “No one at Google
giggled while intentionally programming its software to mislabel black
people.”37 Here McWhorter is referring to photo-tagging software that
classified dark-skinned users as “gorillas.” Having discovered no
bogeyman behind the screen, he dismisses the idea of “racist
technology” because that implies “designers and the people who hire
them are therefore ‘racists.’” But this expectation of individual intent
to harm as evidence of racism is one that scholars of race have long
rejected.38
We could expect a Black programmer, immersed as she is in the same
systems of racial meaning and economic expediency as the rest of her
co-workers, to code software in a way that perpetuates racist
stereotypes. Or, even if she is aware and desires to intervene, will she
be able to exercise the power to do so? Indeed, by focusing mainly on
individuals’ identities and overlooking the norms and structures of the
tech industry, many diversity initiatives offer little more than cosmetic
change, demographic percentages on a company pie chart, concealing
rather than undoing the racist status quo.39
So, can robots – and, by extension, other types of technologies – be
racist? Of course they can. Robots, designed in a world drenched in
racism, will find it nearly impossible to stay dry. To a certain extent,
they learn to speak the coded language of their human parents – not
only programmers but all of us online who contribute to “naturally
occurring” datasets on which AI learn. Just like diverse programmers,
Black and Latinx police officers are known to engage in racial profiling
alongside their White colleagues, though they are also the target of
harassment in a way their White counterparts are not.40 One’s
individual racial identity offers no surefire insulation from the
prevailing ideologies.41 There is no need to identify “giggling
programmers” self-consciously seeking to denigrate one particular
group as evidence of discriminatory design. Instead, so much of what
is routine, reasonable, intuitive, and codified reproduces unjust social
arrangements, without ever burning a cross to shine light on the
problem.42
A representative of Microsoft likened the care they must exercise when
they create and sell predictive algorithms for their customers to
“giving a puppy to a three-year-old. You can’t just deploy it and leave it
alone because it will decay over time.”43 Likewise, describing the many
controversies that surround AI, a Google representative said: “We are
in the uncomfortable birthing stage of artificial intelligence.”44 Zeros
and ones, if we are not careful, could deepen the divides between
haves and have-nots, between the deserving and the undeserving –
rusty value judgments embedded in shiny new systems.
Interestingly, the MIT data scientists interviewed by anthropologist
Kathleen Richardson
were conscious of race, class and gender, and none wanted to
reproduce these normative stereotypes in the robots they created
… [They] avoided racially marking the “skin” of their creations …
preferred to keep their machines genderless, and did not speak in
class-marked categories of their robots as “servants” or “workers,”
but companions, friends and children.45
Richardson contrasts her findings with those of anthropologist Stefan
Helmreich, whose pioneering study of artificial life in the 1990s
depicts researchers as “ignorant of normative models of sex, race,
gender and class that are refigured in the computer simulations of
artificial life.”46 But perhaps the contrast is overdrawn, given that
colorblind, gender-neutral, and class-avoidant approaches to tech
development are another avenue for coding inequity. If data scientists
do indeed treat their robots like children, as Richardson describes,
then I propose a race-conscious approach to parenting artificial life –
one that does not feign colorblindness. But where should we start?
Automating Anti-Blackness
As it happens, the term “stereotype” offers a useful entry point for
thinking about the default settings of technology and society. It first
referred to a practice in the printing trade whereby a solid plate called
a “stereo” (from the ancient Greek adjective stereos, “firm,” “solid”)
was used to make copies. The duplicate was called a “stereotype.”47
The term evolved; in 1850 it designated an “image perpetuated
without change” and in 1922 was taken up in its contemporary
iteration, to refer to shorthand attributes and beliefs about different
groups. The etymology of this term, which is so prominent in everyday
conceptions of racism, urges a more sustained investigation of the
interconnections between technical and social systems.
To be sure, the explicit codification of racial stereotypes in computer
systems is only one form of discriminatory design. Employers resort to
credit scores to decide whether to hire someone, companies use
algorithms to tailor online advertisements to prospective customers,
judges employ automated risk assessment tools to make sentencing
and parole decisions, and public health officials apply digital
surveillance techniques to decide on which city blocks to focus medical
resources. Such programs are able to sift and sort a much larger set of
data than their human counterparts, but they may also reproduce
long-standing forms of structural inequality and colorblind racism.
And these default settings, once fashioned, take on a life of their own,
projecting an allure of objectivity that makes it difficult to hold anyone
accountable.48 Paradoxically, automation is often presented as a
solution to human bias – a way to avoid the pitfalls of prejudicial
thinking by making decisions on the basis of objective calculations and
scores. So, to understand racist robots, we must focus less on their
intended uses and more on their actions. Sociologist of technology
Zeynep Tufekci describes algorithms as “computational agents who
are not alive, but who act in the world.”49 In a different vein,
philosopher Donna Haraway’s (1991) classic Simians, Cyborgs and
Women narrates the blurred boundary between organisms and
machines, describing how “myth and tool mutually constitute each
other.”50 She describes technologies as “frozen moments” that allow us
to observe otherwise “fluid social interactions” at work. These
“formalizations” are also instruments that enforce meaning –
including, I would add, racialized meanings – and thus help construct
the social world.51 Biased bots and all their coded cousins could also
help subvert the status quo by exposing and authenticating the
existence of systemic inequality and thus by holding up a “black
mirror” to society,52 challenging us humans to come to grips with our
deeply held cultural and institutionalized biases.53
Consider the simple corrections of our computer systems, where
words that signal undue privilege are not legible. The red line tells us
that only one of these phenomena, underserved and overserved, is
legitimate while the other is a mistake, a myth (Figure 1.3).
But power is, if anything, relational. If someone is experiencing the
underside of an unjust system, others, then, are experiencing its
upside. If employers are passing up your job application because they
associate negative qualities with your name, then there are more jobs
available for more appealing candidates. If, however, we do not have a
word to describe these excess jobs, power dynamics are harder to
discuss, much less intervene in. If you try this exercise today, your
spellcheck is likely to recognize both words, which reminds us that it is
possible to change technical systems so that they do not obscure or
distort our understanding and experience of social systems. And, while
this is a relatively simple update, we must make the same demand of
more complex forms of coded inequity and tune into the socially
proscribed forms of (in)visibility that structure their design.
Figure 1.3 Overserved
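To see how thin the technical layer is here, consider a toy spellchecker, sketched in Python with a hypothetical, radically abridged lexicon (no real product’s code): a word earns a red line simply because it is absent from the dictionary, and the “relatively simple update” is a one-line addition:

LEXICON = {"underserved"}  # hypothetical dictionary contents

def flagged(words):
    # Return the words that would receive a red underline.
    return [w for w in words if w not in LEXICON]

print(flagged(["underserved", "overserved"]))  # ['overserved']

LEXICON.add("overserved")  # the "relatively simple update"
print(flagged(["underserved", "overserved"]))  # []

What the red line presents as a fact about language is, mechanically, a fact about what was put in the lexicon.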
If we look strictly at the technical features of, say, automated soap
dispensers and predictive crime algorithms, we may be tempted to
home in on their differences. When we consider the stakes, too, we
might dismiss the former as relatively harmless, and even a distraction
from the dangers posed by the latter. But rather than starting with
these distinctions, perhaps there is something to be gained by putting
them in the same frame to tease out possible relationships. For
instance, the very idea of hygiene – cleaning one’s hands and “cleaning
up” a neighborhood – echoes a racialized vocabulary. Like the Beauty
AI competition, many advertisements for soap conflate darker skin
tones with unattractiveness and more specifically with dirtiness, as did
an ad from the 1940s where a White child turns to a Black child and
asks, “Why doesn’t your mama wash you with fairy soap?” Or another
one, from 2017, where a Black woman changes into a White woman
after using Dove soap. The idea of hygiene, in other words, has been
consistently racialized, all the way from marketing to public policy. In
fact the most common euphemism for eugenics was “racial hygiene”:
ridding the body politic of unwanted populations would be akin to
ridding the body of unwanted germs. Nowadays we often associate
racial hygienists with the Nazi Holocaust, but many early proponents
were American progressives who understood eugenics as a form of
social uplift and Americanization. The ancient Greek
etymon, eugeneia (εὐγένεια), meant “good birth,” and this
etymological association should remind us how promises of goodness
often hide harmful practices. As Margaret Atwood writes, “Better
never means better for everyone … It always means worse, for some.”
Take a seemingly mundane tool for enforcing segregation – separate
water fountains – which is now an iconic symbol for the larger system
of Jim Crow. In isolation from the broader context of racial
classification and political oppression, a “colored” water fountain
could be considered trivial, though in many cases the path from
segregated public facilities to routine public lynching was not very
long. Similarly, it is tempting to view a “Whites only” soap dispenser
as a trivial inconvenience. In a viral video, two individuals, one White
and one Black, show that their hotel soap dispenser does not work for
the latter, giggling as they expose the problem. But when we
situate in a broader racial context what appears to be an innocent
oversight, the path from restroom to courtroom might be shorter than
we expect.
That said, there is a straightforward explanation when it comes to the
soap dispenser: near infrared technology requires light to bounce back
from the user and activate the sensor, so skin with more melanin,
absorbing as it does more light, does not trigger the sensor. But this
strictly technical account says nothing about why this particular
sensor mechanism was used, whether there are other options, which
recognize a broader spectrum of skin tones, and how this problem was
overlooked during development and testing, well before the dispenser
was installed. Like segregated water fountains of a previous era, the
discriminatory soap dispenser offers a window onto a wider social
terrain. As the soap dispenser is, technically, a robot, this discussion
helps us consider the racism of robots and the social world in which
they are designed.
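A minimal sketch of the decision logic inside such a dispenser makes the design question concrete. The threshold below is hypothetical, not drawn from any real product; the point is that if the sensor is calibrated and tested only against lighter skin, the failure ships with the device:

TRIGGER_THRESHOLD = 0.45  # hypothetical fraction of emitted IR that must return

def dispense(reflectance: float) -> bool:
    # Release soap only if enough near-infrared light bounces back.
    return reflectance >= TRIGGER_THRESHOLD

# Skin with more melanin absorbs more light, so less is reflected:
print(dispense(0.60))  # True  - lighter skin triggers the sensor
print(dispense(0.30))  # False - darker skin gets no soap

Nothing in the code mentions race, and yet the single number chosen during testing decides who gets soap.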
For instance, we might reflect upon the fact that the infrared
technology of an automated soap dispenser treats certain skin tones as
normative and upon the reason why this technology renders Black
people invisible when they hope to be seen, while other technologies,
for example facial recognition for police surveillance, make them
hypervisible when they seek privacy. When we draw different
technologies into the same frame, the distinction between “trivial” and
“consequential” breaks down and we can begin to understand how
Blackness can be both marginal and focal to tech development. For
this reason I suggest that we hold off on drawing too many bright lines
– good versus bad, intended versus unwitting, trivial versus
consequential. Sara Wachter-Boettcher, the author of Technically
Wrong, puts it thus: “If tech companies can’t get the basics right …
why should we trust them to provide solutions to massive societal
problems?”54 The issue is not simply that innovation and inequity can
go hand in hand but that a view of technology as value-free means that
we are less likely to question the New Jim Code in the same way we
would the unjust laws of a previous era, assuming in the process that
our hands are clean.
Engineered Inequity
In one of my favorite episodes of the TV show Black Mirror, we enter a
world structured by an elaborate social credit system that shapes every
encounter, from buying a coffee to getting a home loan. Every
interaction ends with people awarding points to one another through
an app on their phones; but not all the points are created equal. Titled
“Nosedive,” the episode follows the emotional and social spiral of the
main protagonist, Lacie, as she pursues the higher rank she needs in
order to qualify for an apartment in a fancy new housing development.
When Lacie goes to meet with a points coach to find out her options,
he tells her that the only way to increase her rank in such a short time
is to get “up votes from quality people. Impress those upscale folks,
you’ll gain velocity on your arc and there’s your boost.” Lacie’s routine
of exchanging five stars with service workers and other “mid- to low-
range folks” won’t cut it if she wants to improve her score quickly. As
the title of the series suggests, Black Mirror offers a vivid reflection on
the social dimensions of technology – where we are and where we
might be going with just a few more clicks in the same direction. And,
although the racialized dimensions are not often made very explicit,
there is a scene toward the beginning of the episode when Lacie
notices all her co-workers conspiring to purposely lower the ranking of
a Black colleague and forcing him into a subservient position as he
tries to win back their esteem … an explicit illustration of the New Jim
Code.
When it comes to engineered inequity, there are many different types
of “social credit” programs in various phases of prototype and
implementation that are used for scoring and ranking populations in
ways that reproduce and even amplify existing social hierarchies.
Many of these come wrapped in the packaging of progress. And, while
the idiom of the New Jim Code draws on the history of racial
domination in the United States as a touchstone for technologically
mediated injustice, our focus must necessarily reach beyond national
borders and trouble the notion that racial discrimination is isolated
and limited to one country, when a whole host of cross-cutting social
ideologies make that impossible.
Already being implemented, China’s social credit system is an
exemplar of explicit ranking with far-reaching consequences. What’s
more, Black Mirror is referenced in many of the news reports of
China’s experiment, which started in 2014, with the State Council
announcing its plans to develop a way to score the trustworthiness of
citizens. The government system, which will require mandatory
enrollment starting from 2020, builds on rating schemes currently
used by private companies.
Using proprietary algorithms, these apps track not only financial
history, for instance whether someone pays his bills on time or repays
her loans, but also many other variables, such as one’s educational,
work, and criminal history. As they track all one’s purchases,
donations, and leisure activities, something like too much time spent
playing video games marks the person as “idle” (for which points may
be docked), whereas an activity like buying diapers suggests that one is
“responsible.” As one observer put it, “the system not only investigates
behaviour – it shapes it. It ‘nudges’ citizens away from purchases and
behaviours the government does not like.”55 Most alarmingly (as this
relates directly to the New Jim Code), residents of China’s Xinjiang, a
predominantly Muslim province, are already being forced to download
an app that aims to track “terrorist and illegal content.”
Lest we be tempted to think that engineered inequity is a problem
“over there,” just recall Donald Trump’s idea to register all Muslims in
the United States on an electronic database – not to mention
companies like Facebook, Google, and Instagram, which already
collect the type of data employed in China’s social credit system.
Facebook has even patented a scoring system, though it hedges when
asked whether it will ever develop it further. Even as distinct histories,
politics, and social hierarchies shape the specific convergence of
innovation and inequity in different contexts, it is common to observe,
across this variation, a similar deployment of buzzwords, platitudes,
and promises.
What sets China apart (for now) is that all those tracked behaviors are
already being rated and folded into a “citizen score” that opens or
shuts doors, depending on one’s ranking.56 People are given low marks
for political misdeeds such as “spreading rumors” about government
officials, for financial misdeeds such as failing to pay a court fine, or
social misdeeds such as spending too much time playing video games.
A low score brings on a number of penalties and restrictions, barring
people from opportunities such as a job or a mortgage and prohibiting
certain purchases, for example plane tickets or train passes.57 The
chief executive of one of the companies that pioneered the scoring
system says that it “will ensure that the bad people in society don’t
have a place to go, while good people can move freely and without
obstruction.”58
Indeed, it is not only the desire to move freely, but all the additional
privileges that come with a higher score that make it so alluring: faster
service, VIP access, no deposits on rentals and hotels – not to mention
the admiration of friends and colleagues. Like so many other
technological lures, systems that seem to objectively rank people on
the basis of merit and things we like, such as trustworthiness, invoke
“efficiency” and “progress” as the lingua franca of innovation. China’s
policy states: “It will forge a public opinion environment where
keeping trust is glorious. It will strengthen sincerity in government
affairs, commercial sincerity, social sincerity and the construction of
judicial credibility.”59 In fact, higher scores have become a new status
symbol, even as low scorers are a digital underclass who may, we are
told, have an opportunity to climb their way out of the algorithmic
gutter.
Even the quality of people in one’s network can affect one’s score – a
bizarre scenario that has found its way onto TV shows like Black
Mirror and Community, where even the most fleeting interpersonal
interactions produce individual star ratings, thumbs up and down,
giving rise to digital elites and subordinates. As Zeynep Tufekci
explains, the ubiquitous incitement to “like” content on Facebook is
designed to accommodate the desires of marketers and works against
the interests of protesters, who want to express dissent by “disliking”
particular content.60 And, no matter how arbitrary or silly the credit
(see “meow meow beenz” in the TV series Community), precisely
because people and the state invest it with import, the system carries
serious consequences for one’s quality of life, until finally the pursuit
of status spins out of control.
The phenomenon of measuring individuals not only by their behavior
but by their networks takes the concept of social capital to a whole new
level. In her work on marketplace lenders, sociologist Tamara K.
Nopper considers how these companies help produce and rely on what
she calls digital character – a “profile assessed to make inferences
regarding character in terms of credibility, reliability, industriousness,
responsibility, morality, and relationship choices.”61 Automated social
credit systems make a broader principle of merit-based systems clear:
scores assess a person’s ability to conform to established definitions of
good behavior and valued sociality rather than measuring any intrinsic
quality. More importantly, the ideological commitments of dominant
groups typically determine what gets awarded credit in the first place,
automating social reproduction. This implicates not only race and
ethnicity; depending on the fault lines of a given society, merit systems
also codify class, caste, sex, gender, religion, and disability oppression
(among other factors). The point is that multiple axes of domination
typically converge in a single code.
Take the credit associated with the aforementioned categories of
playing video games and buying diapers. There are many ways to parse
the values embedded in the distinction between the “idle” and the
“responsible” citizen so that it lowers the scores of gamers and
increases the scores of diaper changers. There is the ableist logic,
which labels people who spend a lot of time at home as
“unproductive,” whether they play video games or deal with a chronic
illness; the conflation of economic productivity and upright citizenship
is ubiquitous across many societies.
Consider, too, how gender norms are encoded in the value accorded to
buying diapers, together with the presumption that parenthood
burnishes (and, by extension, childlessness tarnishes) one’s character.
But one may wonder about the consequences of purchasing too many
diapers. Does reproductive excess lower one’s credit? Do assumptions
about sex and morality, often fashioned by racist and classist views,
shape the interpretation of having children and of purchasing diapers?
In the United States, for instance, one could imagine the eugenic
sensibility that stigmatizes Black women’s fertility and celebrates
White women’s fecundity getting codified through a system that
awards points for diapers purchased in suburban zip codes and
deducts points for the same item when purchased in not yet gentrified
parts of the city – the geography of social worth serving as a proxy for
gendered racism and the New Jim Code. In these various scenarios,
top-down reproductive policies could give way to a social credit system
in which the consequences of low scores are so far-reaching that they
could serve as a veritable digital birth control.
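To make concrete how such value judgments become executable rules, consider a minimal sketch of a scoring function. Everything here – the categories, weights, zip codes, and multiplier – is invented for illustration and is not drawn from any documented system:

    # A deliberately naive "citizen score," written only to show how normative
    # judgments ("idle" vs. "responsible," where a purchase happens) become
    # executable code. All categories, weights, and zip codes are invented.
    ACTIVITY_WEIGHTS = {
        "video_game_hours": -0.5,   # coded as "idle"
        "diapers_purchased": 2.0,   # coded as "responsible"
        "bills_paid_on_time": 1.0,
        "court_fines_unpaid": -5.0,
    }

    def citizen_score(activities: dict, zip_code: str) -> float:
        """Sum weighted activities, then scale by neighborhood."""
        base = sum(ACTIVITY_WEIGHTS.get(k, 0.0) * v
                   for k, v in activities.items())
        # Hypothetical geographic multiplier: the same diaper purchase is
        # worth more in a "desirable" zip code -- place, a proxy for race
        # and class, quietly becomes merit.
        multiplier = 1.2 if zip_code in {"07043"} else 0.8
        return base * multiplier

    print(citizen_score({"video_game_hours": 10, "diapers_purchased": 3},
                        zip_code="07043"))  # 1.2 * (-5.0 + 6.0) = 1.2

Nothing in the sketch mentions race, gender, or disability, and yet every line encodes a judgment about whose habits and whose neighborhoods count as creditworthy – which is precisely how multiple axes of domination converge in a single code.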
In a particularly poignant exchange toward the end of the “Nosedive”
episode, Lacie is hitchhiking her way to win the approval of an elite
group of acquaintances; and motorists repeatedly pass her by on
account of her low status. Even though she knows the reason for being
disregarded, when a truck driver of even lower rank kindly offers to
give her a ride, Lacie looks down her nose at the woman (“nosedive”
indeed). She soon learns that the driver has purposefully opted out of
the coercive point system and, as they make small talk, the trucker
says that people assume that, with such a low rank, she must be an
“antisocial maniac.” Lacie reassures the woman, telling her she “seem[s]
normal.” Finally, the trucker wonders about Lacie’s fate: “I mean
you’re a 2.8 but you don’t look 2.8.” This moment is illuminating as to
how abstract quantification gets embodied – that the difference
between a 2.8 and a 4.0 kind of person should be self-evident and
readable on the (sur)face. This is a key feature of racialization: we take
arbitrary qualities (say, social score, or skin color), imbue them with
cultural importance, and then act as if they reflected natural qualities
in people (and differences between them) that should be obvious just
by looking at someone.62
In this way speculative fiction offers us a canvas for thinking about the
racial vision that we take for granted in our day-to-day lives. The
White protagonist, in this case, is barred from housing, transportation,
and relationships – a fictional experience that mirrors the forms of
ethno-racial exclusions that many groups have actually experienced;
and Lacie’s low status, just like that of her real-life counterparts, is
attributed to some intrinsic quality of her person rather than to the
coded inequity that structures her social universe. The app, in this
story, builds upon an already existing racial arithmetic, expanding the
terms of exclusion to those whose Whiteness once sheltered them
from harm. This is the subtext of so much science fiction: the anxiety
that, if “we” keep going down this ruinous road, then we might be
next.
Ultimately the danger of the New Jim Code positioning is that existing
social biases are reinforced – yes. But new methods of social control
are produced as well. Does this mean that every form of technological
prediction or personalization has racist effects? Not necessarily. It
means that, whenever we hear the promises of tech being extolled, our
antennae should pop up to question what all that hype of “better,
faster, fairer” might be hiding and making us ignore. And, when bias
and inequity come to light, “lack of intention” to harm is not a viable
alibi. One cannot reap the reward when things go right but downplay
responsibility when they go wrong.
Notes
1. Visit Beauty.AI First Beauty Contest Judged by Robots, at
http://beauty.ai.
2. Pearson 2016b.
3. Pearson 2016b.
4. Levin 2016.
5. Both Harcourt quotations are from Levin 2016.
6. See http://beauty.ai.
7. See https://machinelearningmastery.com/what-is-deep-learning.
8. Metz 2013.
9. Field note, Jack Clark’s Keynote Address at the Princeton University
AI and Ethics Conference, March 10, 2018.
10. The flip side of personalization is what Eubanks (2018) refers to as
an “empathy override.” See also Edes 2018.
11. Fox 2012, n.p.
12. “Homelessness is not a systems engineering problem, it’s a
carpentry problem” (Eubanks 2018, p. 125).
13. The term “uncanny valley” was coined by Masahiro Mori in 1970
and translated into English by Reichardt (1978).
14. But it is worth keeping in mind that many things dubbed “AI”
today are, basically, just statistical predictions rebranded in the age
of big data – an artificial makeover that engenders more trust as a
result. This point was made by Arvind Narayanan in response to a
Microsoft case study at a workshop sponsored by the Princeton
University Center for Human Values and Center for Informational
Technology Policy, October 6, 2017.
15. Truitt 2016.
16. Richardson 2015, p. 5.
17. Richardson 2015, p. 2.
18. As Imani Perry (2018, p. 49) explains, “Mary Shelley’s
Frankenstein provided a literary example of the domestic anxiety
regarding slavery and colonialism that resulted from this structure
of relations … Frankenstein’s monster represented the fear of the
monstrous products that threatened to flow from the peculiar
institutions. The novel lends itself to being read as a response to
slave revolts across the Atlantic world. But it can also be read as
simply part of anxiety attendant to a brutal and intimate
domination, one in which the impenetrability of the enslaved was
already threatening.”
19. Richardson 2015, p. 2.
20. Everett 2009, p. 1.
21. Binder 1957.
22. These passages come from a PoliceOne report that cautions us: “as
wonderful an asset as they are, they cannot provide a complete
picture. The camera eye can only see so much, and there are many
critical elements of information that may go undiscovered or
unrecognized … Throwable robots provide such an advance in
situational awareness that it can be easy to forget that our
understanding of the situation is still incomplete” (visit
https://www.policeone.com/police-products/police-technology/robots/articles/320406006-5-tactical-considerations-for-throwable-robot-deployment).
23. Rorty 1962.
24. Daniels 2015, p. 1379. See also Crain et al. 2016; Gajjala 2004;
Hossfeld 1990; Pitti 2004; Shih 2006.
25. Nakamura 2002, p. 24.
26. Daniels 2013, p. 679.
27. Noble and Tynes 2016.
28. Field note from the Princeton University Center for Human Values
and Center for Informational Technology Policy Workshop, October
6, 2017.
29. The notion of “racist robots” is typically employed in popular
discourse around AI. I use it as a rhetorical device to open up a
discussion about a range of contemporary technologies, most of
which are not human-like automata of the kind depicted in films
and novels. They include forms of automation integrated in
everyday life, like soap dispensers and search engines, bureaucratic
interventions that seek to make work more efficient, as in policing
and healthcare, and fantastical innovations first imagined in
science fiction, such as self-driving cars and crime prediction
techniques.
30. McWhorter 2016.
31. Field note from the Princeton University Center for Human Values
and Center for Informational Technology Policy Workshop, October
6, 2017.
32. The famed android Lieutenant Commander Data of the hit series
Star Trek understood well the distinction between inputs and
outputs, intent and action. When a roughish captain of a small
cargo ship inquired whether Data had ever experienced love, Data
responded, “The act or the emotion?” And when the captain replied
that they’re both the same, Data rejoined, “I believe that statement
to be inaccurate, sir.” Just as loving behavior does not require
gushing Valentine’s Day sentiment, so too can discriminatory action
be fueled by indifference and disregard, and even by good intention,
more than by flaming hatred.
33. Baldwin 1998, p. 129.
34. See
https://www.nclc.org/images/pdf/credit_discrimination/InsuranceScoringWhitePaper
35. Policeone.com, at https://www.policeone.com/police-
products/police-technology/robots.
36. This is brought to life in the 2016 HBO series Silicon Valley, which
follows a young Steve Jobs type of character, in a parody of the tech
industry. In a segment at TechCrunch, a conference where start-up
companies present their proof of concept to attract venture capital
investment, one presenter after another exclaims, “we’re making
the world a better place” with each new product that also claims to
“revolutionize” some corner of the industry. See
https://longreads.com/2016/06/13/silicon-valley-masterfully-skewers-tech-culture.
37. McWhorter 2016.
38. Sociologist Eduardo Bonilla-Silva (2006) argues that, “if racism is
systemic, this view of ‘good’ and ‘bad’ whites distorts reality” (p.
132). He quotes Albert Memmi saying: “There is a strange enigma
associated with the problem of racism. No one, or almost no one,
wishes to see themselves as racist; still, racism persists, real and
tenacious” (Bonilla-Silva 2006, p. 1).
39. Dobush 2016.
40. Perry explains how racial surveillance does not require a
“bogeyman behind the curtain; it is a practice that emerges from
our history, conflicts, the interests of capital, and political
expediency in the nation and the world … Nowhere is the diffuse
and individuated nature of this practice more apparent than in the
fact that over-policing is not limited to White officers but is instead
systemic” (Perry 2011, p. 105).
41. Calling for a post-intentional analysis of racism, Perry argues that
intent is not a good measure of discrimination because it “creates a
line of distinction between ‘racist’ and ‘acceptable’ that is
deceptively clear in the midst of a landscape that is, generally
speaking, quite unclear about what racism and racial bias are, who
[or what] is engaging in racist behaviors, and how they are doing
so” (Perry 2011, p. 21).
42. Schonbrun 2017.
43. Field note from the Princeton University Center for Human Values
and Center for Informational Technology Policy Workshop, October
6, 2017.
44. Field note from the Princeton University Center for Human Values
and Center for Informational Technology Policy Workshop, October
6, 2017.
45. Richardson 2015, p. 12.
46. Richardson 2015, p. 12; see also Helmreich 1998.
47. See s.v. “stereotype” at https://www.etymonline.com/word/stereotype (Online Etymology Dictionary).
48. “It is to say, though, that all those inhabiting subject positions of
racial power and domination – notably those who are racially White
in its various formulations in different racially articulated societies
– project and extend racist socialities by default. But the default is
not the only position to occupy or in which to invest. One remains
with the default because it is given, the easier to inhabit, the
sociality of thoughtlessness” (Goldberg 2015, pp. 159–60).
49. Tufekci 2015, p. 207.
50. Haraway 1991, p. 164.
51. Haraway 1991, p. 164.
52. This potential explains the name of the provocative TV series Black
Mirror.
53. According to Feagin and Elias (2013, p. 936), systemic racism
refers to “the foundational, large-scale and inescapable hierarchical
system of US racial oppression devised and maintained by whites
and directed at people of colour … [It] is foundational to and
engineered into its major institutions and organizations.”
54. Wachter-Boettcher 2017, p. 200. On the same page, the author also
argues that “[w]e’ll only be successful in ridding tech of excesses
and oversights if we first embrace a new way of seeing the digital
tools we rely on – not as a wonder, or even as a villain, but rather as
a series of choices that designers and technologists have made.
Many of them small: what a button says, where a data set comes
from. But each of these choices reinforces beliefs about the world,
and the people in it.”
55. Botsman 2017.
56. Nguyen 2016.
57. Morris 2018.
58. State Council 2014.
59. State Council 2014.
60. Tufekci 2017, p. 128.
61. Nopper 2019, p. 170.
62. Hacking 2007.
2
Default Discrimination
Is the Glitch Systemic?
GLITCH
a minor problem
a false or spurious electronic signal
a brief or sudden interruption or irregularity
may derive from Yiddish, glitsh – to slide, glide, “slippery place.”1
When Princeton University media specialist Allison Bland was driving
through Brooklyn, the Google Maps narrator directed her to “turn
right on Malcolm Ten Boulevard,” verbally interpreting the X in the
street name as a Roman numeral rather than as referring to the Black
liberation leader who was assassinated in New York City in 1965
(Figure 2.1).
Social and legal codes, like their byte-size counterparts, are not
neutral; nor are all codes created equal. They reflect particular
perspectives and forms of social organization that allow some people
to assert themselves – their assumptions, interests, and desires – over
others. From the seemingly mundane to the extraordinary, technical
systems offer a mirror to the wider terrain of struggle over the forces
that govern our lives.
Figure 2.1 Malcolm Ten
Source: Twitter @alliebland, November 19, 2013, 9:42 p.m.
Database design, in that way, is “an exercise in worldbuilding,” a
normative process in which programmers are in a position to project
their world views – a process that all too often reproduces the
technology of race.2 Computer systems are a part of the larger matrix
of systemic racism. Just as legal codes are granted an allure of
objectivity – “justice is (color)blind” goes the fiction – there is
enormous mystique around computer codes, which hides the human
biases involved in technical design.
The Google Maps glitch is better understood as a form of displacement
or digital gentrification mirroring the widespread dislocation
underway in urban areas across the United States. In this case, the
cultural norms and practices of programmers – who are drawn from a
narrow racial, gender, and classed demographic – are coded into
technical systems that, literally, tell people where to go. These
seemingly innocent directions, in turn, reflect and reproduce
racialized commands that instruct people where they belong in the
larger social order.3
Ironically, this problem of misrecognition actually reflects a solution
to a difficult coding challenge. A computer’s ability to parse Roman
numerals, interpreting an “X” as “ten,” was a hard-won design
achievement.4 That is, from a strictly technical standpoint, “Malcolm
Ten Boulevard” would garner cheers. This illustrates how innovations
reflect the priorities and concerns of those who frame the problems to
be solved, and how such solutions may reinforce forms of social
dismissal, regardless of the intentions of individual programmers.
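One plausible way such a misreading arises – sketched here as a guess at the failure mode, not as Google’s actual pipeline – is a text-normalization pass that expands Roman-numeral tokens in street names before speech synthesis:

    import re

    # Expand standalone Roman-numeral tokens for text-to-speech,
    # e.g. "Route IX" -> "Route Nine". A hard-won parsing feature
    # that has no concept of "X" as a person's name.
    ROMAN_NUMERALS = {"II": "Two", "III": "Three", "IV": "Four",
                      "V": "Five", "IX": "Nine", "X": "Ten"}

    def expand_roman(street_name: str) -> str:
        pattern = r"\b(?:III|II|IV|IX|V|X)\b"
        return re.sub(pattern,
                      lambda m: ROMAN_NUMERALS[m.group(0)],
                      street_name)

    print(expand_roman("Malcolm X Blvd"))  # -> "Malcolm Ten Blvd"

Against the test cases its designers had in mind, the rule works; the failure surfaces only for the names and histories they did not think to include.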
While most observers are willing to concede that technology can be
faulty, acknowledging the periodic breakdowns and “glitches” that
arise, we must be willing to dig deeper.5 A narrow investment in
technical innovation necessarily displaces a broader set of social
interests. This is more than a glitch. It is a form of exclusion and
subordination built into the ways in which priorities are established
and solutions defined in the tech industry. As Andrew Russell and Lee
Vinsel contend, “[t]o take the place of progress, ‘innovation,’ a smaller,
and morally neutral, concept arose. Innovation provided a way to
celebrate the accomplishments of a high-tech age without expecting
too much from them in the way of moral and social improvement.”6
For this reason, it is important to question “innovation” as a
straightforward social good and to look again at what is hidden by an
idealistic vision of technology. How is technology already raced?
This chapter probes the relationship between glitch and design, which
we might be tempted to associate with competing conceptions of
racism. If we think of racism as something of the past or requiring a
particular visibility to exist, we can miss how the New Jim Code
operates and what seeming glitches reveal about the structure of
racism. Glitches are generally considered a fleeting interruption of an
otherwise benign system, not an enduring and constitutive feature of
social life. But what if we understand glitches instead to be a slippery
place (with reference to the possible Yiddish origin of the word)
between fleeting and durable, micro-interactions and macro-
structures, individual hate and institutional indifference? Perhaps in
that case glitches are not spurious, but rather a kind of signal of how
the system operates. Not an aberration but a form of evidence,
illuminating underlying flaws in a corrupted system.
Default Discrimination
At a recent workshop sponsored by a grassroots organization called
Stop LAPD Spying, the facilitator explained that community members
with whom she works might not know what algorithms are, but they
know what it feels like to be watched. Feelings and stories of being
surveilled are a form of “evidence,” she insisted, and community
testimony is data.7 As part of producing those data, the organizers
interviewed people about their experiences with surveillance and their
views on predictive policing. They are asked, for example: “What do
you think the predictions are based on?” One person, referring to the
neighborhood I grew up in, responded:
Because they over-patrol certain areas – if you’re only looking on
Crenshaw and you only pulling Black people over then it’s only
gonna make it look like, you know, whoever you pulled over or
whoever you searched or whoever you criminalized that’s gonna
be where you found something.8
Comments like this remind us that people who are most directly
impacted by the New Jim Code have a keen sense of the default
discrimination facilitated by these technologies. As a form of social
technology, institutional racism, past and present, is the precondition
for the carceral technologies that underpin the US penal system. At
every stage of the process – from policing, sentencing, and
imprisonment to parole – automated risk assessments are employed
to determine people’s likelihood of committing a crime.9 They
determine the risk profile of neighborhoods in order to concentrate
police surveillance, or the risk profile of individuals in order to
determine whether or for how long to release people on parole.
In a recent study of the recidivism risk scores assigned to thousands of
people arrested in Broward County, Florida, ProPublica investigators
found that the score was remarkably unreliable in forecasting violent
crime. They also uncovered significant racial disparities:
In forecasting who would re-offend, the algorithm made mistakes
with black and white defendants at roughly the same rate but in
very different ways. The formula was particularly likely to falsely
flag black defendants as future criminals, wrongly labeling them
this way at almost twice the rate as white defendants. White
defendants were mislabeled as low risk more often than black
defendants.10
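How can mistakes occur “at roughly the same rate” overall and yet fall so unevenly? A toy confusion-matrix calculation – with invented counts, not ProPublica’s data – makes the distinction concrete:

    # Invented counts showing how two groups can share the same overall
    # accuracy while errors differ sharply in kind.
    # Tuples: (true positives, false positives, false negatives, true negatives)
    groups = {"black": (300, 200, 100, 400),
              "white": (150, 75, 225, 550)}

    for g, (tp, fp, fn, tn) in groups.items():
        accuracy = (tp + tn) / (tp + fp + fn + tn)
        fpr = fp / (fp + tn)  # non-reoffenders wrongly flagged high-risk
        fnr = fn / (fn + tp)  # reoffenders wrongly labeled low-risk
        print(g, round(accuracy, 2), round(fpr, 2), round(fnr, 2))

    # black 0.7 0.33 0.25
    # white 0.7 0.12 0.6
    # Identical accuracy, yet one group is wrongly flagged at nearly
    # three times the rate of the other.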
The algorithm generating the risk score builds upon already existing
forms of racial domination and reinforces them precisely because the
apparatus ignores how race shapes the “weather.” Literary scholar
Christina Sharpe describes the weather as “the total climate; and the
climate is antiblack.”11 For example, the survey given to prospective
parolees to forecast the likelihood that they will recidivate includes
questions about their criminal history, education and employment
history, financial history, and neighborhood characteristics (among
many other factors). As all these variables are structured by racial
domination – from job market discrimination to ghettoization – the
survey measures the extent to which an individual’s life chances have
been impacted by racism without ever asking an individual’s race.12
Likewise, predictive policing software will always be more likely to
direct police to neighborhoods like the one I grew up in, because the
data that this software is drawing from reflect ongoing surveillance
priorities that target predominantly Black neighborhoods.13 Anti-
Blackness is no glitch. The system is accurately rigged, we might say,
because, unlike in natural weather forecasts, the weathermen are also
the ones who make it rain.14
Even those who purportedly seek “fairness” in algorithmic decision-
making are not usually willing to assert that the benchmark for
whether an automated prediction is “unwarranted” is whether it strays
from the proportion of a group in the larger population. That is, if a
prediction matches the current crime rate, it is still unjust! Even so,
many who are grappling with how to enact ethical practices in this
arena still use the crime rate as the default measure of whether an
algorithm is predicting fairly, when that very measure is a byproduct
of ongoing regimes of selective policing and punishment.15
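A short, invented calculation shows why the crime rate is a compromised benchmark. Suppose two groups offend at an identical rate, but one is patrolled three times as heavily:

    # All numbers invented. "Recorded crime" reflects where police look,
    # not only what people do.
    population_share  = {"A": 0.13, "B": 0.87}
    true_offense_rate = {"A": 0.05, "B": 0.05}  # identical by assumption
    patrol_intensity  = {"A": 3.0,  "B": 1.0}   # group A is over-patrolled

    recorded = {g: population_share[g] * true_offense_rate[g] * patrol_intensity[g]
                for g in population_share}
    total = sum(recorded.values())
    print({g: round(r / total, 2) for g, r in recorded.items()})
    # -> {'A': 0.31, 'B': 0.69}: group A is 13 percent of the population,
    # offends at the same rate, yet accounts for 31 percent of recorded
    # crime. Calibrated to these records, a model will call that
    # disproportion "warranted."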
Figure 2.2 Patented PredPol Algorithm
Source: http://www.predpol.com/technology
Interestingly, the most commonly used algorithm in Los Angeles and
elsewhere, called PredPol, is drawn directly from a model used to
predict earthquake aftershocks (Figure 2.2). As author of Carceral
Capitalism, Jackie Wang gives us this description: “In police
departments that use PredPol, officers are given printouts of
jurisdiction maps that are covered with red square boxes that indicate
where crime is supposed to occur throughout the day … The box is a
kind of temporary crime zone.” She goes on to ask:
What is the attitude or mentality of the officers who are patrolling
one of the boxes? When they enter one of the boxes, do they
expect to stumble upon a crime taking place? How might the
expectation of finding crime influence what the officers actually
find? Will people who pass through these temporary crime zones
while they are being patrolled by officers automatically be
perceived as suspicious? Could merely passing through one of the
red boxes constitute probable cause?16
Let me predict: yes. If we consider that institutional racism in this
country is an ongoing unnatural disaster, then crime prediction
algorithms should more accurately be called crime production
algorithms. The danger with New Jim Code predictions is the way in
which self-fulfilling prophecies enact what they predict, giving the
allure of accuracy. As the man behind PredPol’s media strategy put it,
“it sounds like fiction, but it’s more like science fact.”17
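To see how prediction shades into production, consider a stylized simulation in the spirit of a self-exciting “aftershock” model, with one added ingredient: events are recorded only where officers are sent to look. This is a caricature for illustration; PredPol’s actual model is proprietary, and the parameters here are invented:

    import random

    def step(intensity, discovered, mu=0.1, alpha=0.5, decay=0.9):
        # Tomorrow's predicted risk = background rate + decayed memory of
        # past predictions + excitation from newly *recorded* events.
        return mu + decay * intensity + alpha * discovered

    random.seed(0)
    intensity = {"patrolled box": 0.1, "unpatrolled box": 0.1}
    for day in range(60):
        for box in intensity:
            event_occurs = random.random() < 0.3        # same everywhere
            watched = box == "patrolled box"
            discovered = int(event_occurs and watched)  # found only if watched
            intensity[box] = step(intensity[box], discovered)
    print(intensity)
    # The patrolled box's predicted risk climbs well above the unpatrolled
    # box's, even though underlying events are identical: the forecast
    # manufactures the disparity it claims to discover.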
Predicting Glitches
One of the most iconic scenes from The Matrix film trilogy deals with
the power of predictions and self-fulfilling prophecies. The main
protagonist, Neo, goes to visit the Oracle, a software program depicted
as a Black woman in her late sixties. Neo is trying to figure out
whether he is who others think he is – “the one” who is supposed to
lead humanity in the war against the machines. As he tries to get a
straight answer from the Oracle and to figure out whether she really
has the gift of prophecy, she says, “I’d ask you to sit down, but you’re
not going to anyway. And don’t worry about the vase.”
NEO: What vase? [Neo knocks a vase to the floor]
THE ORACLE: That vase.
NEO: I’m sorry.
THE ORACLE: I said don’t worry about it. I’ll get one of
my kids to fix it.
NEO: How did you know?
THE ORACLE: What’s really going to bake your noodle later on is,
would you still have broken it if I hadn’t said anything.18
This scene invites a question about real-life policing: Would cops still
have warrants to knock down the doors in majority Black
neighborhoods if predictive algorithms hadn’t said anything?
The Matrix offers a potent allegory for thinking about power,
technology, and society. It is set in a dystopian future in which
machines overrun the world, using the energy generated by human
brains as a vital source of computing power. Most of humanity is held
captive in battery-like pods, their minds experiencing an elaborate life-
like simulation of the real world in order to pacify humans and
maximize the amount of energy brains produce. The film follows a
small band of freedom fighters who must convince Neo that the
simulated life he was living is in fact a digital construction.
Early on in his initiation to this new reality, Neo experiences a fleeting
moment of déjà vu when a black cat crosses his path – twice. Trinity,
his protector and eventual love interest, grows alarmed and explains
that this “glitch in the matrix” is not at all trivial but a sign that
something about the program has been changed by the agents of the
Matrix. The sensation of déjà vu is a warning sign that a confrontation
is imminent and that they should prepare to fight.
The film’s use of déjà vu is helpful for considering the relationship
between seemingly trivial technical glitches and meaningful design
decisions. The glitch in this context is not an insignificant “mistake”
to be patched over, but rather serves as a signal of something
foundational about the structure of the world meant to pacify humans.
It draws attention to the construction and reconstruction of the
program and functions as an indication that those seeking freedom
should be ready to spring into action.
A decade before the Matrix first hit the big screen, Black feminist
theorist Patricia Hill Collins conceptualized systemic forms of
inequality in terms of a “matrix of domination” in which race, class,
gender, and other axes of power operated together, “as sites of
domination and as potential sites of resistance.”19 This interlocking
matrix operates at individual, group, and institutional levels, so that
empowerment “involves rejecting the dimensions of knowledge,
whether personal, cultural, or institutional, that perpetuate
objectification and dehumanization.”20 Relating this dynamic to the
question of how race “gets inside” technology, the Roman numeral
glitch of Google Maps and others like it urge us to look again at the
way our sociotechnical systems are constructed – by whom and to
what ends.
Racist glitches – such as celebrity chef Paula Deen’s admission that
“yes, of course” she has used the N-word alongside her desire to host a
“really southern plantation wedding” with all-Black servers;21 or a
tape-recorded phone call in which former Los Angeles Clippers owner
and real estate mogul Donald Sterling told a friend “[i]t bothers me a
lot that you want to broadcast that you’re associating with black
people”22 – come and go, as provocative sound bites muffling a deeper
social reckoning. In my second example, the scandal associated with
Sterling’s racist remarks stands in stark contrast with the hush and
acceptance of a documented pattern of housing discrimination
exercised over many years, wherein he refused to rent his properties to
Black and Latinx tenants in Beverly Hills and to non-Korean tenants
in LA’s Koreatown.23 In the midst of the suit brought by the
Department of Justice, the Los Angeles chapter of the National
Association for the Advancement of Colored People nevertheless
honored Sterling with a lifetime achievement award in 2009. Only
once his tape-recorded remarks went public in 2014 did the
organization back out of plans to award him this highest honor for a
second time, forcing the chapter president to resign amid criticism.
Dragging individuals as objects of the public condemnation of racist
speech has become a media ritual and pastime. Some may consider it a
distraction from the more insidious, institutionalized forms of racism
typified by Sterling’s real estate practices. The déjà vu regularity of all
those low-hanging N-words would suggest that stigmatizing
individuals is not much of a deterrent and rarely addresses all that
gives them license and durability.
But, as with Trinity’s response to Neo in the Matrix regarding his path
being crossed twice by a black cat, perhaps if we situated racist
“glitches” in the larger complex of social meanings and structures, we
too could approach them as a signal rather than as a distraction.
Sterling’s infamous phone call, in this case, would alert us to a deeper
pattern of housing discrimination, with far-reaching consequences.
Systemic Racism Reloaded
Scholars of race have long challenged the focus on individual “bad
apples,” often to be witnessed when someone’s racist speech is
exposed in the media – which is typically followed by business as
usual.24 These individuals are treated as glitches in an otherwise
benign system. By contrast, sociologists have worked to delineate how
seemingly neutral policies and norms can poison the entire “orchard”
or structure of society, systematically benefiting some while
subjugating others.25
Whereas racist glitches are often understood as transient, as signals
they can draw our attention to discriminatory design as a durable
feature of the social landscape since this nation’s founding. As
sociologists Joe Feagin and Sean Elias write, “[i]n the case of US
society, systemic racism is foundational to and engineered into its
major institutions and organizations.”26 This reorientation is also
exemplified by Eduardo Bonilla-Silva’s Racism without Racists, in
which he defines “racialized social systems, or white supremacy for
short … as the totality of the social relations and practices that
reinforce white privilege. Accordingly, the task of analysts interested
in studying racial structures is to uncover the particular social,
economic, political, social control, and ideological mechanisms
responsible for the reproduction of racial privilege in a society.”27
Taken together, this work builds upon the foundational insights of
Charles V. Hamilton and Kwame Ture (né Stokely Carmichael), who
developed the term “institutional racism” in 1967. While the authors
discuss the linkage between institutional racism and what they
describe as individual racism, they also state:
This is not to say that every single white American consciously
oppresses black people. He does not need to. Institutional racism
has been maintained deliberately by the power structure and
through indifference, inertia, and lack of courage on the part of
the white masses as well as petty officials … The line between
purposeful suppression and indifference blurs.28
But taking issue with the overwhelming focus on top-down forces that
characterize work on systemic racism, including Feagin and Elias’
“theory of oppression,” Michael Omi and Howard Winant highlight
the agency and resistance of those subordinated by such systems. They
say:
To theorize racial politics and the racial state, then, is to enter the
complex territory where structural racism encounters self-
reflective action, the radical practice of people of color (and their
white allies) in the United States. It is to confront the instability of
the US system of racial hegemony, in which despotism and
democracy coexist in seemingly permanent conflict.29
Strikingly, throughout this early work on institutional racism and
structural inequality, there was very little focus on the role of
technologies, beyond mass media, in advancing or undermining racial
ideologies and structures. As Jessie Daniels notes in “Race and Racism
in Internet Studies”:
The role of race in the development of Internet infrastructure and
design has largely been obscured (Taborn, 2008). As Sinclair
observes, “The history of race in America has been written as if
technologies scarcely existed, and the history of technology as if it
were utterly innocent of racial significance.”30
Daniels’ (2009) Cyber Racism illuminates how “white supremacy has
entered the digital era” while acknowledging how those “excluded by
the white-dominated mainstream media” also use the Internet for
grassroots organizing and antiracist discourse.31 In so doing, she
challenges both those who say that technology is only a “source of
danger” when it comes to the active presence of White supremacists
online and those who assume that technology is “inherently
democratizing.”32 Daniels echoes Nakamura’s (2002, 2008)
frustration with how race remains undertheorized in Internet studies
and urges more attention to the technology of structural racism. In
line with the focus on glitches, researchers tend to concentrate on how
the Internet perpetuates or mediates racial prejudice at the individual
level rather than analyze how racism shapes infrastructure and design.
And, while Daniels does not address this problem directly, an
investigation of how algorithms perpetuate or disrupt racism should
be considered in any study of discriminatory design.
Architecture and Algorithms
On a recent visit to the University of California, San
my hosts explained that the design of the campus made it almost
impossible to hold large outdoor gatherings. The “defensive”
architecture designed to prevent skateboarding and cycling in the
interest of pedestrians also deliberately prevented student protests at a
number of campuses following the Berkeley free speech protests in the
mid-1960s. This is not so much a trend in urban planning as an
ongoing feature of stratified societies. For some years now, as I have
been writing and thinking about discriminatory design of all sorts, I
keep coming back to the topic of public benches: benches I tried to lie
down on but was prevented because of intermittent arm rests, then
benches with spikes that retreat after you feed the meter, and many
more besides.
Like the discriminatory designs we are exploring in digital worlds,
hostile architecture can range from the more obvious to the more
insidious – like the oddly shaped and artistic-looking bench that
makes it uncomfortable but not impossible to sit for very long.
Whatever the form, hostile architecture reminds us that public space is
a permanent battleground for those who wish to reinforce or challenge
hierarchies. So, as we explore the New Jim Code, we can observe
connections in the building of physical and digital worlds, even
starting with the use of “architecture” as a common metaphor for
describing what algorithms – those series of instructions written and
maintained by programmers that adjust on the basis of human
behavior – build. But, first, let’s take a quick detour …
The era commonly called “Jim Crow” is best known for the system of
laws that mandated racial segregation and upheld White supremacy in
the United States between 1876 and 1965. Legal codes, social codes,
and building codes intersected to keep people separate and unequal.
The academic truism that race is “constructed” rarely brings to mind
these concrete brick and mortar structures, much less the digital
structures operating today. Yet if we consider race as itself a
technology, as a means to sort, organize, and design a social structure
as well as to understand the durability of race, its consistency and
adaptability, we can understand more clearly the literal architecture of
power.
Take the work of famed “master builder” Robert Moses, who in the
mid-twentieth century built hundreds of structures, highways, bridges,
stadiums, and more, prioritizing suburbanization and upper-middle-
class mobility over public transit and accessibility to poor and
working-class New Yorkers. In a now iconic (yet still disputed) account
of Moses’ approach to public works, science and technology studies
scholar Langdon Winner describes the low-hanging overpasses that
line the Long Island parkway system. In Winner’s telling, the design
prevented buses from using the roads, which enabled predominantly
White, affluent car owners to move freely, while working-class and
non-White people who relied on buses were prevented from accessing
the suburbs and the beaches. And while the veracity of Winner’s
account continues to be debated, the parable has taken on a life of its
own, becoming a narrative tool for illustrating how artifacts “have
politics.”33
For our purpose, Moses’ bridges symbolize the broader architecture of
Jim Crow. But, whereas Jim Crow laws explicitly restricted Black
people from numerous “White only” spaces and services, the physical
construction of cities and suburbs is central to the exercise of racial
power, including in our postcivil rights era. And, while some scholars
dispute whether Moses intended to exclude Black people from New
York suburbs and beaches, one point remains clear: the way we
engineer the material world reflects and reinforces (but could also be
used to subvert) social hierarchies.
Yet plans to engineer inequity are not foolproof. In April 2018 a group
of high school students and their chaperones returning from a spring
break trip to Europe arrived at Kennedy Airport and boarded a charter
bus that was headed to a Long Island shopping center where parents
waited to pick up their kids. As they drove to the mall, the bus driver’s
navigation system failed to warn him about the low-hanging bridges
that line the Long Island parkway and the bus slammed violently into
the overpass, crushing the roof, seriously wounding six, and leaving
dozens more injured. As news reports pointed out, this was only the
latest of hundreds of similar accidents that happened over the years,
despite numerous warning signs and sensor devices intended to alert
oncoming traffic of the unusually low height of overpasses. Collateral
damage, we might say, is part and parcel of discriminatory design.
From what we know about the people whom city planners have tended
to prioritize in their designs, families such as the ones who could send
their children to Europe for spring break loom large among them.
But a charter bus with the roof shaved off reminds us that tools of
social exclusion are not guaranteed to impact only those who are
explicitly targeted to be disadvantaged through discriminatory design.
The best-laid plans don’t necessarily “stay in their lane,” as the saying
goes. Knowing this, might it be possible to rally more people against
social and material structures that immobilize some to the benefit of
others? If race and other axes of inequity are constructed, then
perhaps we can construct them differently?
When it comes to search engines such as Google, it turns out that
online tools, like racist robots, reproduce the biases that persist in the
social world. They are, after all, programmed using algorithms that are
constantly updated on the basis of human behavior and are learning
and replicating the technology of race, expressed in the many different
associations that the users make. This issue came to light in 2016,
when some users searched the phrase “three Black teenagers” and
were presented with criminal mug shots. Then when they changed the
phrase to “three White teenagers,” users were presented with photos
of smiling, happy-go-lucky youths; and a search for “three Asian teenagers”
presented images of scantily clad girls and women. Taken together,
these images reflect and reinforce popular stereotypes of Black
criminality, White innocence, or Asian women’s sexualization that
underpin much more lethal and systemic forms of punishment,
privilege, and fetishism respectively.34 The original viral video that
sparked the controversy raised the question “Is Google being racist?,”
followed by a number of analysts who sought to explain how these
results were produced:
The idea here is that computers, unlike people, can’t be racist but
we’re increasingly learning that they do in fact take after their
makers … Some experts believe that this problem might stem
from the hidden biases in the massive piles of data that
algorithms process as they learn to recognize patterns …
reproducing our worst values.35
According to the company, Google itself uses “over 200 unique signals
or ‘clues’ that make it possible to guess what you might be looking
for.”36 Or, as one observer put it, “[t]he short answer to why Google’s
algorithm returns racist results is that society is racist.”37 However,
this does not mean that we have to wait for a social utopia to float
down from the clouds before expecting companies to take action. They
are already able to optimize online content in ways that mitigate bias.
Today, if you look up the keywords in Noble’s iconic example, the
phrase “Black girls” yields images of Black Girls Code founder
Kimberly Bryant and #MeToo founder Tarana Burke, along with
images of organizations like Black Girls Rock! (an awards show) and
Black Girls Run (a wellness movement). The technical capacity was
always there, but social awareness and incentives to ensure fair
representation online were lacking. As Noble reports, the pornography
industry has billions of dollars to throw at companies in order to
optimize content, so advertising cannot continue to be the primary
driver of online content. Perhaps Donald Knuth’s proverbial warning
is true: “premature optimization is the root of all evil.”38 And so the
struggle to democratize information gateways continues.39
A number of other examples illustrate algorithmic discrimination as
an ongoing problem. When a graduate student searched for
“unprofessional hairstyles for work,” she was shown photos of Black
women; when she changed the search to “professional hairstyles for
work,” she was presented with photos of White women.40 Men are
shown ads for high-income jobs much more frequently than are
women, and tutoring for what is known in the United States as the
Scholastic Aptitude Test (SAT) is priced more highly for customers in
neighborhoods with a higher density of Asian residents: “From retail
to real estate, from employment to criminal justice, the use of data
mining, scoring and predictive software … is proliferating … [And]
when software makes decisions based on data, like a person’s zip code,
it can reflect, or even amplify, the results of historical or institutional
discrimination.”41
A team of Princeton researchers studying associations made with
Black-sounding names and White-sounding names confirmed findings
from employment audit studies42 to the effect that respondents make
negative associations with Black names and positive associations with
White ones. Caliskan and colleagues show that widely used language-
processing algorithms trained on human writing from the Internet
reproduce human biases along racist and sexist lines.43 They call into
question the assumption that computation is pure and unbiased,
warning that, “if we build an intelligent system that learns enough
about the properties of language to be able to understand and produce
it, in the process it will also acquire historic cultural associations, some
of which can be objectionable. Already, popular online translation
systems incorporate some of the biases we study … Further concerns
may arise as AI is given agency in our society.”44 And, as we shall see
in the following chapters, the practice of codifying existing social
prejudices into a technical system is even harder to detect when the
stated purpose of a particular technology is to override human
prejudice.
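The mechanism Caliskan and colleagues describe can be illustrated with a WEAT-style association test. The sketch below uses invented two-dimensional “embeddings” so that it is self-contained; the published study uses pretrained vectors such as GloVe, trained on web-scale text:

    import numpy as np

    # Invented coordinates standing in for learned word embeddings.
    vectors = {
        "pleasant":   np.array([1.0, 0.1]),
        "unpleasant": np.array([-1.0, 0.1]),
        "emily":      np.array([0.9, 0.2]),   # White-sounding name
        "lakisha":    np.array([-0.8, 0.3]),  # Black-sounding name
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def association(word):
        # Positive -> nearer "pleasant"; negative -> nearer "unpleasant".
        return (cosine(vectors[word], vectors["pleasant"])
                - cosine(vectors[word], vectors["unpleasant"]))

    for name in ("emily", "lakisha"):
        print(name, round(association(name), 2))
    # With embeddings actually trained on web text, this same arithmetic
    # recovers the racialized associations documented in the corpus.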
Notes
1. Merriam-Webster Online, n.d.
2. Personal interview conducted by the author with Princeton digital
humanities scholar Jean Bauer, October 11, 2016.
3. See references to “digital gentrification” in “White Flight and Digital
Gentrification,” posted on February 28 at
https://untsocialmedias13.wordpress.com/2013/02/28/white-flight-and-digital-gentrification by jalexander716.
4. Sampson 2009.
5. As Noble (2018, p. 10) writes, “[a]lgorithmic oppression is not just a
glitch in the system but, rather, is fundamental to the operating
system of the web.”
6. Russell and Vinsel 2016.
7. See the conference “Dismantling Predictive Policing in Los Angeles,”
May 8, 2018, at https://stoplapdspying.org/wp-content/uploads/2018/05/Before-the-Bullet-Hits-the-Body-May-8-2018.
8. “Dismantling predictive policing in Los Angeles,” pp. 38–9.
9. Ferguson 2017.
10. Angwin et al. 2016.
11. According to Sharpe (2016, p. 106), “the weather necessitates
changeability and improvisation,” which are key features of
innovative systems that adapt, in this case, to postracial norms
where racism persists through the absence of race.
12. Meredith Broussard, data journalist and author of Artificial
Unintelligence, explains: “The fact that nobody at Northpointe
thought that the questionnaire or its results might be biased has to
do with technochauvinists’ unique worldview. The people who
believe that math and computation are ‘more objective’ or ‘fairer’
tend to be the kind of people who think that inequality and
structural racism can be erased with a keystroke. They imagine that
the digital world is different and better than the real world and that
by reducing decisions to calculations, we can make the world more
rational. When development teams are small, like-minded, and not
diverse, this kind of thinking can come to seem normal. However, it
doesn’t move us toward a more just and equitable world”
(Broussard 2018, p. 156).
13. Brayne 2014.
14. As Wang (2018, p. 236) puts it, “the rebranding of policing in a way
that foregrounds statistical impersonality and symbolically removes
the agency of individual officers is a clever way to cast police
activity as neutral, unbiased, and rational. This glosses over the fact
that using crime data gathered by the police to determine where
officers should go simply sends police to patrol the poor
neighborhoods they have historically patrolled when they were
guided by their intuitions and biases. This ‘new paradigm’ is not
merely a reworking of the models and practices used by law
enforcement, but a revision of the police’s public image through the
deployment of science’s claims to objectivity.”
15. I am indebted to Naomi Murakawa for highlighting for me the
strained way in which scholars and criminologists tend to discuss
“unwarranted disproportion,” as if the line between justified and
unjustified is self-evident rather than an artifact of racist policing,
with or without the aid of crime prediction software. See Murakawa
2014.
16. Wang 2018, p. 241.
17. Wang 2018, p. 237.
18. From scifiquotes.net, http://scifiquotes.net/quotes/123_Dont-Worry-About-the-Vase; emphasis added.
19. Collins 1990, p. 227.
20. Collins 1990, p. 230.
21. Goodyear 2013.
22. Goyette 2014.
23. Associated Press 2006.
24. Daniels 2013, p. 709.
25. Golash-Boza 2016.
26. Feagin and Elias, 2013, p. 936.
27. Bonilla-Silva 2006, p. 9.
28. Hamilton and Ture 1967, p. 38. Scholar of African American
studies Keeanga-Yamahtta Taylor describes the term “institutional
racism” as prescient, noting that “it is the outcome that matters, not
the intentions of the individuals involved” (Taylor 2016, p. 8).
29. Omi and Winant 1994, pp. 137–8.
30. Sinclair 2004, p. 1; cf. Daniels 2013, p. 696.
31. Daniels 2009, p. 2.
32. Daniels 2009, p. 4.
33. Winner 1980.
34. Helm 2016.
35. Pearson 2016a.
36. See “How search algorithms work,”
https://www.google.co.uk/insidesearch/howsearchworks/algorithms.html
37. See Chiel 2016; in its own defense, the company explained thus:
“‘Our image search results are a reflection of content from across
the web, including the frequency with which types of images appear
and the way they’re described online,’ a spokesperson told the
Mirror. This means that sometimes unpleasant portrayals of
sensitive subject matter online can affect what image search results
appear for a given query. These results don’t reflect Google’s own
opinions or beliefs – as a company, we strongly value a diversity of
perspectives, ideas and cultures.”
38. Roberts 2018.
39. Sociologist Zeynep Tufekci (2019) puts it thus: “These companies
– which love to hold themselves up as monuments of free
expression – have attained a scale unlike anything the world has
ever seen; they’ve come to dominate media distribution, and they
increasingly stand in for the public sphere itself. But at their core,
their business is mundane: They’re ad brokers. To virtually anyone
who wants to pay them, they sell the capacity to precisely target our
eyeballs. They use massive surveillance of our behavior, online and
off, to generate increasingly accurate, automated predictions of
what advertisements we are most susceptible to and what content
will keep us clicking, tapping, and scrolling down a bottomless
feed.”
40. Chiel 2016.
41. Kirchner 2015a.
42. Bertrand and Mullainathan 2003.
43. Pearson 2016a.
44. Caliskan et al. 2017, p. 186.
3
Coded Exposure
Is Visibility a Trap?
I think my Blackness is interfering with the computer’s ability to
follow me.
Webcam user1
EXPOSURE
the amount of light per unit area
the disclosure of something secret
the condition of being unprotected
the condition of being at risk of financial loss
the condition of being presented to view or made known.2
In the short-lived TV sitcom Better Off Ted, the writers parody the
phenomenon of biased technology in an episode titled “Racial
Sensitivity.” In the episode, the corporation where the show is set
installs a “new state of the art system that’s gonna save
money,” but employees soon find there is a “glitch in the system that
keeps it from recognizing people with dark skin.”3 When the show’s
protagonist confronts his boss, suggesting the sensors are racist, she
insists otherwise:
The company’s position is that it’s actually the opposite of racist
because it’s not targeting black people, it’s just ignoring them.
They insist that the worst people can call it is indifferent … In the
meantime, they’d like to remind everyone to celebrate the fact
that it does see Hispanics, Asians, Pacific Islanders, and Jews.4
The show brilliantly depicts how the default Whiteness of tech
development, a superficial corporate diversity ethos, and the
prioritization of efficiency over equity work together to ensure that
innovation produces social containment.5 The fact that Black
employees are unable to use the elevators, doors, and water fountains
or turn the lights on is treated as a minor inconvenience in service to a
greater good. The absurdity goes further when, rather than removing
the sensors, the company “blithely installs separate, manually
operated drinking fountains for the convenience of the black
employees,”6 an incisive illustration of the New Jim Code wherein tech
advancement, posed as a solution, conjures a prior racial regime in the
form of separate water fountains.
Eventually the company sees the error of its ways and decides to hire
minimum-wage-earning White employees to follow Black employees
around the building, so that the sensors will activate. But then the
legal team determines that, for each new White worker, they must hire
an equal number of Black workers, and on and on, in a spiraling quota
that ends when the firm finally decides to reinstall the old sensors.
Playing off of the political anxieties around reverse discrimination and
affirmative action, the episode title “Racial Sensitivity” – a formula
that usually designates a charge brought against Black people who call
attention to racism – is a commentary on the company’s insensitivity
and on the absurdity of its fixes. The writers seem to be telling us that
more, not less, sensitivity is the solution to the technological and
institutional dilemma of coded inequity. The episode also manages to
illustrate how indifference to Blackness can be profitable within the
logic of racial capitalism until the social costs become too high to
maintain.7
Multiply Exposed
Some technologies fail to see Blackness, while others render Black
people hypervisible and expose them to systems of racial surveillance.8
Exposure, in this sense, takes on multiple meanings.9 Exposing film is
a delicate process – artful, scientific, and entangled in forms of social
and political vulnerability and risk. Who is seen and under what terms
holds a mirror onto more far-reaching forms of power and inequality.
Far from being neutral or simply aesthetic, images have been one of
the primary weapons in reinforcing and opposing social oppression.
From the development of photography in the Victorian era to the
image-filtering techniques in social media apps today, visual
technologies and racial taxonomies fashion each other.10
Photography was developed as a tool to visually capture and classify
human difference; it also helped to construct and solidify existing
technologies, namely the ideas of race and assertions of empire, which
required visual evidence of stratified difference.11 Unlike older-school
images, such as the paintings and engravings of exotic “others” that
circulated widely before the Victorian period, photographs held an
allure of objectivity, a sense that such images “were free from the bias
of human imagination … a neutral reflection of the world.”12 Yet such
reflections were fabricated according to the demands and desires of
those who exercised power and control over others. Some photographs
were staged, of course, to reflect White supremacist desires and
anxieties. But race as a means of sorting people into groups on the
basis of their presumed inferiority and superiority was staged in and of
itself, long before becoming the object of photography.
What of the modern photographic industry? Is it more democratic and
value-neutral than image-making was in previous eras? With the invention of
color photography, the positive bias toward lighter skin tones was built
into visual technologies and “presented to the public as neutral.”
The claim to neutrality rests on the idea that “physics is physics,” even though the
very techniques of color-balancing an image reinforce a dominant
White ideal.13 And when it comes to the latest digital techniques, social
and political factors continue to fashion computer-generated images.
In this visual economy, race is not only digitized but heightened and
accorded greater value.
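To make the calibration point concrete, here is a minimal, hypothetical sketch (mine, not drawn from the chapter or its sources) of how a “neutral” imaging routine can encode a biased default: a toy auto-exposure function whose target luminance, REFERENCE_MEAN, is assumed to have been tuned on a light-skinned reference card, much as film-era calibration cards were. The scaling arithmetic really is just physics; the bias lives entirely in the constant.

```python
# Illustrative sketch only: a toy auto-exposure routine whose "neutral"
# math hides a biased calibration constant. REFERENCE_MEAN is assumed
# (hypothetically) to have been tuned on a light-skinned reference card.

import numpy as np

REFERENCE_MEAN = 0.70  # target mean luminance, set by the calibration card


def auto_expose(image: np.ndarray) -> np.ndarray:
    """Scale a float image in [0, 1] so its mean luminance hits the target.

    The gain computation is pure arithmetic ("physics is physics"), but
    every frame is judged against a reference chosen for lighter skin.
    """
    gain = REFERENCE_MEAN / max(float(image.mean()), 1e-6)
    return np.clip(image * gain, 0.0, 1.0)


# A darker-skinned subject against the same backdrop yields a lower mean,
# so the routine applies a large gain: the background blows out, while
# facial shadow detail clipped at capture time never comes back.
dark_scene = np.array([[0.15, 0.20], [0.25, 0.60]])
print(auto_expose(dark_scene))
```

Real imaging pipelines involve metering, tone curves, and white balance rather than a single constant, but the pattern is the same: the reference against which “correct” exposure is judged is a design choice, not a law of nature.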
This chapter traces the complex processes involved in “exposing” race
in and through technology and the implications of presenting partial
and distorted visions as neutral and universal. Linking historical
precedents with contemporary techniques, the different forms of
“exposure” noted in the epigraph serve as a touchstone for considering
how the act of viewing something or someone may put the object of
vision at risk. This kind of scopic vulnerability is central to the
experience of being racialized.
In many ways, philosopher and psychiatrist Frantz Fanon’s classic text
Black Skin, White Masks is a meditation on scopic vulnerability. He
describes the experience of being looked at, but not truly seen, by a
White child on the streets of Paris:
“Look, a Negro!”
It was an external stimulus that flicked over me as I passed by.
I made a tight smile.
“Look, a Negro!” It was true. It amused me.
“Look, a Negro!” The circle was drawing a bit tighter. I made no secret
of my amusement.
“Mama, see the Negro! I’m frightened!” Frightened! Frightened! Now
they were beginning to be afraid of me. I made up my mind to laugh
myself to tears, but laughter had become impossible.
This story reveals that a key feature of Black life in racist societies is
the constant threat of exposure and of being misread; and that being
exposed is also a process of enclosure, a form of suffocating social
constriction.
In a beautiful essay titled “Skin Feeling,” literary scholar Sofia Samatar
reminds us: “The invisibility of a person is also the visibility of a race
… to be constantly exposed as something you are not.”14 Yet, in the
distorted funhouse reflection of racist conditioning, the White children
are the ones who fancy themselves at risk. Fanon’s experience
on the streets of Paris foreshadows the technologically mediated forms
of exposure that pervade Black life today. Whether we are talking
about the widespread surveillance systems built into urban landscapes
or the green light sitting above your laptop screen, detection and
recognition are easily conflated when the default settings are distorted
by racist logics.15
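The conflation of detection with recognition can be stated in a few lines of code. The sketch below is hypothetical (the Detection type, threshold, and controller are my stand-ins, not any real product’s API); it shows the design flaw that the webcam anecdote and the “Racial Sensitivity” episode dramatize: a controller that reads “my detector did not fire” as “no person is present,” so any demographic skew in the detector’s confidence scores becomes, for some users, a building that will not turn the lights on.

```python
# Illustrative sketch only: a sensor controller that conflates detection
# with presence. Detection, CONFIDENCE_THRESHOLD, and lights_on are
# hypothetical stand-ins, not any real system's API.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Detection:
    confidence: float  # detector's self-reported confidence in [0, 1]


CONFIDENCE_THRESHOLD = 0.8  # one global cutoff, tuned on unrepresentative data


def lights_on(detection: Optional[Detection]) -> bool:
    # The conflation: absence of a detection is treated as absence of a person.
    return detection is not None and detection.confidence >= CONFIDENCE_THRESHOLD


# If the underlying model returns systematically lower confidences for
# darker-skinned faces, the fixed threshold silently translates that skew
# into "nobody is here."
print(lights_on(Detection(confidence=0.92)))  # True: lights come on
print(lights_on(Detection(confidence=0.55)))  # False: person present, lights stay off
```

A fix at the threshold alone would miss the point; as the episode suggests, the remedy is more sensitivity across the pipeline, from training data to the decision about what “no detection” is allowed to mean.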
Finally, as it circulates in the domain of finance, the term “exposure”
quantifies how much one stands to lose in an investment. If, as legal
scholar Cheryl I. Harris argues, Whiteness is a form of property and if
there is a “possessive investment in whiteness” (as sociologist George
Lipsitz describes it), then visual technologies offer a site where we can
examine how the value of Whiteness is underwritten through multiple
forms of exposure by which racialized others are forcibly and
fictitiously observed but not seen. That said, photography has also
been a powerful tool to invest in Blackness. Take cultural studies
scholar and media activist Yaba Blay’s work on the social, psychic, and
public health harms associated with skin bleaching. In addition to
scholarly analysis, she created a media campaign called Pretty.Period,
which counters the faux compliment that dark-skinned women must
routinely endure: “you’re pretty for a dark-skinned girl.” By exposing
the gendered racism coded in the qualifier, Blay responds “No, we’re
pretty PERIOD.”16 The campaign has produced an expansive archive
with thousands of striking images of dark-skinned women of all ages
across the African diaspora whose beauty is not up for debate.