TECHNOLOGY
The Perils of Using Technology to Solve Other
People’s Problems
What will it take to design socio-technical systems that actually work?
ETHAN ZUCKERMAN | JUN 23, 2016
I found Shane Snow’s essay on prison reform — “How Soylent and Oculus Could
Fix the Prison System” — through hate-linking.
Friends of mine hated the piece so much that normally articulate people were at a
loss for words.
Susie Cagle (@susie_c): "A real person thought it would be a good idea to write this and post it on the Internet."
Quoting her earlier tweet of Jan 30, 2016: "NOPE maneatingrobot.com/96/prison-refo…"
With a recommendation like that, how could I pass it up? And after reading it, I
tweeted my astonishment to Susie, who told me, “I write comics, but I don’t know
how to react to this in a way that’s funny.” I realized that I couldn’t offer an
appropriate reaction in 140 characters either. The more I think about Snow’s essay,
the more it looks like the outline for a class on the pitfalls of solving social
problems with technology, a class I’m now planning on teaching this coming fall.
Using Snow’s essay as a jumping-off point, I want to consider a problem that’s been
on my mind a great deal since joining the MIT Media Lab five years ago: How do
we help smart, well-meaning people address social problems in ways that make the
world better, not worse?
In other words, is it possible to get beyond both a naïve belief that the latest
technology will solve social problems—and a reaction that rubbishes any attempt to
offer novel technical solutions as inappropriate, insensitive, and misguided? Can
we find a synthesis in which technologists look at their work critically and work
closely with the people they’re trying to help in order to build sociotechnical
systems that address hard problems?
Obviously, I think this is possible — if really, really hard — or I wouldn’t be teaching
at an engineering school. But before considering how we overcome a naïve faith in
technology, let’s examine Snow’s suggestion. It’s a textbook example of a solution
that’s technically sophisticated, simple to understand, and dangerously wrong.
* * *
Though he may be best known as co-founder of the content-marketing platform
“Contently,” Shane Snow describes himself as a “journalist, geek and best-selling
author.” That last bit comes from his book Smartcuts: How Hackers, Innovators, and
Icons Accelerate Success, which offers insights on how “innovators and icons” can
“rethink convention” and break “rules that are not rules.”
That background may help readers understand where Snow is coming from. His
blog is filled with plainspoken and often entertaining explanations of complex
systems followed by apparently straightforward conclusions — evidently, burning
coal and natural gas to generate electricity is a poor idea, so oil companies should
be investing in solar energy. Fair enough.
Some of these explorations are more successful than others. In Snow’s essay about
prison reform, he identifies violence, and particularly prison rape, as the key
problem to be solved, and offers a remedy that he believes will lead to cost savings
for taxpayers as well: all prisoners should be incarcerated in solitary confinement,
fed only Soylent meal replacement drink through slots in the wall, and all
interpersonal interaction and rehabilitative services would be provided in Second Life
using the Oculus Rift virtual reality system. Snow’s system eliminates many
features of prison life — “cell blocks, prison yards, prison gyms, physical
interactions with other prisoners, and so on.” That’s by design, he explains. “Those
are all current conventions in prisons, but history is clear: innovation happens
when we rethink conventions and apply alternative learning or technology to old
problems.”
An early clue that Snow’s rethinking is problematic is that his proposed solution
looks a lot like “administrative segregation,” a technique used in prisons to
separate prisoners who might be violent or disruptive from the general population
by keeping them in solitary confinement 23 hours a day. The main problem with
administrative segregation, or with what’s known as the SHU (the “security housing
unit” in supermax prisons) is that inmates tend to experience serious mental health
problems connected to sustained isolation.
“Deprived of normal human interaction, many segregated prisoners reportedly
suffer from mental health problems including anxiety, panic, insomnia, paranoia,
aggression and depression,” explains the social psychologist Craig Haney in a
paper for the journal Crime & Delinquency. Shaka Senghor, a writer and activist
who was formerly incarcerated for murder, explains that many inmates in solitary
confinement have underlying mental health issues, and the isolation damages
even those who are sound of mind. Solitary confinement, he says, is “one of the most barbaric
and inhumane aspects of our society.”
Due to the psychological effects of being held in isolation, the UN Special
Rapporteur on Torture has condemned the use of sustained solitary confinement,
and called for a ban on solitary confinement for people under 18 years old. Raphael
Sperry of Architects/Designers/Planners for Social Responsibility has called for
architects to stop designing prisons that support solitary confinement—the
argument being that they enable violations of human rights. Snow’s solution may
be innovative, but it’s also a large-scale human rights violation.
Snow and supporters might argue that he’s not trying to deprive prisoners of
human contact, but wants to give them a new, safer form of contact. But there’s
essentially no research on the health effects of sustained exposure to head-
mounted virtual reality.
Would prisoners be forced to choose between simulator sickness and isolation? What
are the long-term effects on vision of immersive VR displays? Will prisoners
experience visual fatigue from the vergence-accommodation conflict, a still-unsolved
problem in which the eyes focus on a screen centimeters away while the brain
perceives objects in the distance? Furthermore, will contact with humans through
virtual worlds mitigate the mental problems prisoners face in isolation, or
exacerbate them? And how do we answer any of these questions ethically, given the
restrictions we’ve put on experimenting on prisoners in the wake of Nazi abuses of
concentration-camp prisoners?
How does an apparently intelligent person end up suggesting a solution that might,
at best, constitute unethical medical experiments on prisoners? How does a well-
meaning person suggest a remedy that likely constitutes torture?
* * *
The day I read Snow’s essay, I happened to be leading a workshop on social change
during the Yale Civic Leadership conference. Some of the students I worked with
were part of the movement to rename Yale’s Calhoun College, and all were smart,
thoughtful, creative, and open-minded.
The workshop I led encourages thinkers to consider different ways they might
make social change, not just through electing good leaders and passing just laws.
Our lab at MIT examines the idea that changemakers can use different levers of
change, including social norms, market forces, and new technologies to influence
society, and the workshop I led asks students to propose novel solutions to long-
standing problems featuring one of these levers of change. With Snow’s essay in
mind, I asked the students to take on the challenge of prison reform.
Oddly, none of their solutions involved virtual reality isolation cells. In fact, most
of the solutions they proposed had nothing to do with prisons themselves. Instead,
their solutions focused on over-policing of black neighborhoods, America’s
aggressive prosecutorial culture that encourages those arrested to plead guilty,
legalization of some or all drugs, reform of sentencing guidelines for drug crimes,
reforming parole and probation to reduce re-incarceration for technical offenses,
and building robust re-entry programs to help ex-cons find support, housing, and
gainful employment.
In other words, when Snow focuses on making prison safer and cheaper, he’s
working on the wrong problem.
Yes, prisons in the United States could be safer and cheaper. But the larger problem
is that the U.S. incarcerates more people than any other nation on Earth. With five
percent of the world’s population, we are responsible for 25 percent of the world’s
prisoners.
Snow may see his ideas as radical and transformative, but they’re fundamentally
conservative — he tinkers with the conditions of confinement without questioning
whether incarceration is how our society should solve problems of crime and
addiction. As a result, his solutions can only address a facet of the problem, not the
deep structural issues that lead to the problem in the first place.
Many hard problems require you to step back and consider whether you’re solving
the right problem. If your solution only mitigates the symptoms of a deeper
problem, you may be calcifying that problem and making it harder to change.
Cheaper, safer prisons make it easier to incarcerate more Americans. They also
avoid addressing fundamental problems of addiction, joblessness, mental illness,
and structural racism.
* * *
Some of my hate-linking friends began their eye-rolling about Snow’s article with
the title, which references two of Silicon Valley’s most hyped technologies. With
the current focus on the U.S. as an “innovation economy,” it’s common to read
essays predicting the end of a major social problem due to a technical innovation.
Bitcoin will end poverty in the developing world by enabling inexpensive money
transfers. Wikipedia and One Laptop Per Child will educate the world’s poor
without need for teachers or schools. Self driving cars will obviate public transport
and reshape American cities.
The writer Evgeny Morozov has offered a sharp and helpful critique of this mode of
thinking, which he calls “solutionism.” Solutionism demands that we focus on
problems that have a “nice and clean technological solution at our disposal.” In his
book, To Save Everything, Click Here, Morozov savages ideas like Snow’s, regardless
of whether they are meant as thought experiments or serious policy proposals.
(Indeed, one worry I have in writing this essay is taking Snow’s ideas too seriously,
as Morozov does with many of the ideas he lambastes in his book.)
The problem with the solutionist critique, though, is that it tends to remove
technological innovation from the problem-solver’s toolkit. In fact, technological
development is often a key component in solving complex social and political
problems, and new technologies can sometimes unlock a previously intractable
problem. The rise of inexpensive solar panels may be an opportunity to move
nations away from a dependency on fossil fuels and begin lowering atmospheric
levels of carbon dioxide, much as developments in natural gas extraction and
transport technologies have lessened the use of dirtier fuels like coal.
But it’s rare that technology provides a robust solution to a social problem by itself.
Successful technological approaches to solving social problems usually require
changes in laws and norms, as well as market incentives to make change at scale.
I installed solar panels on the roof of my house last fall. Rapid advances in panel
technology made this a routine investment instead of a luxury, and the existence of
competitive solar installers in our area meant that market pressures kept costs low.
But the panels were ultimately affordable because federal and state legislation
offered tax rebates for their purchase, and because Massachusetts state law
rewards me with solar credits for each megawatt-hour I produce—which I can sell to
utilities through an online marketplace because energy companies are legally
mandated to produce a percentage of their total power output via solar generation.
And while there are powerful technological, economic, and legal forces pushing us
toward solar energy, the most powerful driver may be the social, normative
pressure of seeing our neighbors install solar panels—leaving us feeling like we
weren’t doing our part.
My Yale students who tried to use technology as their primary lever for reforming
U.S. prisons had a difficult time. One team offered the idea of an online social
network that would help recently released prisoners connect with other ex-
offenders to find support, advice, and job opportunities in the outside world.
Another looked at the success of Bard College’s remarkable program to help
inmates earn bachelor’s degrees, and wondered whether online learning
technologies could allow similar efforts to reach thousands more prisoners. But
many of the other promising ideas that arose in our workshops had a technological
component — given the ubiquity of mobile phones, why can’t ex-offenders have
their primary contact with their parole officers via mobile phones? And given the
rise of big-data techniques used for “smart policing,” can we better review patterns
of policing—including identifying and eliminating cases where officers are over-
focusing on some communities?
The temptation of technology is that it promises fast and neat solutions to social
problems. It usually fails to deliver. The problem with Morozov’s critique, though,
is that technological solutions, combined with other paths to change, can
sometimes turn intractable problems into solvable ones. The key is to understand
technology’s role as a lever of change in conjunction with complementary levers.
* * *
Shane Snow introduces his essay on prison reform not with statistics about the
ineffectiveness of incarceration in reducing crime, but with his fear of being sent to
prison. Specifically, he fears prison rape, a serious problem which he radically
overestimates: “My fear of prison also stems from the fact that some 21 percent of
U.S. prison inmates get raped or coerced into giving sexual favors to terrifying
dudes named Igor.” Snow is religious about footnoting his essays, but not as good
at reading the sources he cites — the report he uses to justify his fear of “Igor” (nice
job avoiding accusations of overt racism there, Shane) indicates that 2.91 of 1,000
incarcerated persons experienced sexual violence, or 0.291 percent, not 21 percent.
Perhaps for Snow, isolation for years at a time, living vicariously through a VR
headset while sipping an oat flour smoothie would be preferable to time in the
prison yard, mess hall, workshop, or classroom. But there’s no indication that Snow
has talked to any current or ex-offenders about their time in prison, or about the
ways in which encounters with other prisoners led them to faith, to mentorship, or
to personal transformation.
The people Shane imagines are so scary, so other, that he can’t imagine interacting
with them, learning from them, or anything but being violently assaulted by them.
No wonder he doesn’t bother to ask what aspects of prison life are most and least
livable, and which would benefit most from transformation.
Much of my work focuses on how technologies spread across national, religious
and cultural borders, and how they are transformed by that spread. Cellphone
networks believed that pre-paid scratch cards were an efficient way to sell phone
minutes at low cost — until Ugandans started using the scratch-off codes to send
money via text message in a system called Sente, inventing practical mobile money
in the process. Facebook believes its service is best used by real individuals using
their real names, and goes to great lengths to remove accounts it believes to be
fictional. But when Facebook comes to a country like Myanmar, where it is seen as
a news service, not a social networking service, phone shops specializing in setting
up accounts using fake names and phone numbers render Facebook’s preferences
null and void.
Smart technologists and designers have learned that their preferences are seldom
their users’ preferences, and companies like Intel now employ brilliant
ethnographers to discover how tools are used by actual users in their homes and
offices. Understanding the wants and needs of users is important when you’re
designing technologies for people much like yourself, but it’s utterly critical when
designing for people with different backgrounds, experiences, wants, and needs.
Given that Snow’s understanding of prison life seems to come solely from binge-
watching Oz, it’s virtually guaranteed that his proposed solution will fail in
unanticipated ways when used by real people.
* * *
Among the many wise things my Yale students said during our workshop, one came
from a student who wondered whether he should be participating at all. “I don’t know anything about
prisons, I don’t have family in prison. I don’t know if I understand these problems
well enough to solve them, and I don’t know if these problems are mine to solve.”
Talking about the workshop with my friend and colleague Chelsea Barabas, she
asked the wonderfully deep question, “Is it ever okay to solve another person’s
problem?”
On its surface, the question looks easy to answer. We can’t ask infants to solve
problems of infant mortality, and by extension, it seems unwise to let kindergarten
students design educational policy or demand that the severely disabled design
their own assistive technologies.
But the argument is more complicated when you consider it more closely. It’s
difficult if not impossible to design a great assistive technology without working
closely, iteratively, and cooperatively with the person who will wear or use it. My
colleague Hugh Herr designs cutting-edge prostheses for U.S. veterans who’ve lost
legs, and the centerpiece of his lab is a treadmill where amputees test his limbs,
giving him and his students feedback about what works, what doesn’t, and what
needs to change. Without active collaboration with the people he’s trying to
help, he’s unable to make technological advances.
Disability rights activists have demanded “nothing about us without us,” a slogan
that demands that policies should not be developed without the participation of
those intended to benefit from those policies.
Design philosophies like participatory design and codesign bring this concept to
the world of technology, demanding that technologies designed for a group of
people be designed and built, in part, by those people. Codesign challenges many
of the assumptions of engineering, requiring people who are used to working in
isolation to build broad teams and to understand that those most qualified to offer
a technical solution may be least qualified to identify a need or articulate a design
problem. This method is hard and frustrating, but it’s also one of the best ways to
ensure that you’re solving the right problem, rather than imposing your preferred
solution on a situation.
At the opposite pole from codesign is an approach to engineering we might
understand as “Make things better by making better things.” This school of
thought argues that while mobile phones were designed for rich westerners, not for
users in developing nations, they’ve become one of the transformative
technologies for the developing world. Frustratingly, this argument is valid, too.
Many of the technologies we benefit from weren’t designed for their ultimate
beneficiaries, but were simply designed well and adopted widely. Shane Snow’s
proposal is built in part on this perspective — Soylent was designed for geeks who
wanted to skip meals, not for prisoners in solitary confinement, but perhaps it
might be preferable to Nutraloaf or other horrors of the prison kitchen.
I’m not sure how we resolve the dichotomy of “with us” versus “better things.” I’d
note that every engineer I’ve ever met believes what she’s building is a better thing.
As a result, strategies that depend on finding the optimum solutions often rely on
choice-rich markets where users can gravitate towards the best solution. In other
words, they don’t work very well in an environment like prison, where prisoners are
unlikely to be given a choice between Snow’s isolation cells and the prison as it
currently stands, and are even less likely to participate in designing a better prison.
Am I advocating codesign of prisons with the currently incarcerated? Hell yeah, I
am. And with ex-offenders, corrections officers, families of prisoners, as well as the
experts who design these facilities today. They’re likely to do a better job than
smart Yale students, or technology commentators.
* * *
It is unlikely that anyone is going to invite Shane Snow to redesign a major prison
any time soon, so spending more than 3,000 words urging you to reject his solution
may be a waste of your time and mine. But the mistakes Snow makes are those that
engineers make all the time when they turn their energy and creativity to solving
pressing and persistent social problems. Looking closely at how Snow’s solutions
fall short offers some hope for building better, fairer, and saner solutions.
The challenge, unfortunately, is not in offering a critique of how solutions go
wrong. Excellent versions of that critique exist, from Morozov’s war on
solutionism, to Courtney Martin’s brilliant “The Reductive Seduction of Other
People’s Problems.” If it’s easy to design inappropriate solutions to problems
you don’t fully understand, it’s not much harder to criticize the inadequacy of those
solutions.
What’s hard is synthesis — learning to use technology as part of well-designed
sociotechnical solutions. These solutions sometimes require profound advances in
technology. But they virtually always require people to build complex,
multifunctional teams that work with and learn from the people the technology is
supposed to benefit.
Three students at the MIT Media Lab taught a course last semester called
“Unpacking Impact: Reflecting as We Make.” They point out that the Media Lab
prides itself on teaching students how to make anything, and how to turn what you
make into a business, but rarely teaches reflection about what we make and what it
might mean for society as a whole. My experience with teaching this reflective
process to engineers is that it’s both important and potentially paralyzing:
once we understand the incompleteness of technology as a path for solving
problems and the ways technological solutions relate to social, market, and legal
forces, it can be hard to build anything at all.
I’m going to teach a new course this fall, tentatively titled “Technology and Social
Change.” It’s going to include an examination of the four levers of social change
Larry Lessig suggests in Code, and which I’ve been exploring as possible paths to
civic engagement. The course will include deep methodological dives into
codesign, and will examine using anthropology as a tool for understanding user
needs. It will look at unintended consequences, cases where technology’s best
intentions fail, and cases where careful exploration and preparation led to
technosocial systems that make users and communities more powerful than they
were before.
I’m “calling my shot” here for two reasons. One, by announcing it publicly, I’m less
likely to back out of it, and given how hard these problems are, backing out is a real
possibility. And two, if you’ve read this far in this post, you’ve likely thought about
this issue and have suggestions for what we should read and what exercises we
should try in the course of the class — I hope you might be kind enough to share
those with me.
In the end, I’m grateful for Shane Snow’s surreal, Black Mirror vision of the future
prison both because it’s a helpful jumping-off point for understanding how hard it
is to make change well by using technology, and because the U.S. prison system is a
broken and dysfunctional system in need of change. But we need to find ways to
disrupt better, to challenge knowledgeably, and to bring the people we hope to benefit
into the process. If you can, please help me figure out how we teach these ideas to
the smart, creative people I work with—people who want to change the world, and
are afraid of breaking it in the process.
My own comment:
The arrival of many things such as televisions, cellphones, computers, and automobiles was made possible by technical advancement, and these technologies have had a huge effect on global civilization. Occasionally, a new technology has a historic impact not only on social transformation but on the entire world.
Teacher comment/response:
Very good points to raise on this article, so here's my question: would you be surprised to learn that we study what factors drive hesitation about adopting new technologies? More importantly, what do you think technology developers should understand about potential users' hesitation toward a technology?
Reply back to teacher with response:
Student comment/annotation:
"How do we help smart, well-meaning people address social problems in ways that make the world better, not worse?"
Zuckerman brings up a really interesting point here: as our society becomes more advanced, more people will try to address social problems with their own solutions. It's an interesting dilemma, because the rapid spread of technology now gives everyone the ability to contribute to society. This may sound like a good thing, but it is almost guaranteed that some people will address social problems believing they are helping when, in reality, they are making the problem worse.
Reply back to student about their comment:
Student Comment/annotation:
"How do we help smart, well-meaning people address social problems in ways that make the world better, not worse?"
This is a very intriguing problem. Whenever someone takes a chance on addressing a social problem and it goes wrong, it reflects badly on that person as well as on society. The issue at hand is how a smart, well-meaning person can address social problems in a way that makes the world better while still maintaining a good reputation.
Reply back to student about their comment: